Most of them were noise from the type const correctness pass, but the
qc field function handling was always dubious (though in practice safe
due to how the type was built, but...). The remaining casts outside of
type.c need some thought.
While they might be ok, I expect them to cause some issues when doing
compile-time evaluations of type expressions, so use of dags seems to be
a premature optimization. However, as the "no dags" flag is propagated
to parent expression nodes, it may prove useful in other contexts.
This gets the types such that either there is only one definition, or C
sees the same name for what is essentially the same type despite there
being multiple local definitions.
It's disabled by default because it's a runtime thing and I'm not sure I
want to keep it enabled, but it did find some issues (which I've cleaned
up), although it didn't find the problem I was looking for :P
Two birds with one stone: eliminates most of the problems with going
const-correct with expr_t, and it makes dealing with internally generated
expressions at random locations much easier as the set source location
affects all new expressions created within that scope, to any depth.
Debug output is much easier to read now.
There were a few places where some const-casts were needed, but they're
localized to code that's supposed to manipulate types (but I do want to
come up with something to clean that up).
I'm not sure the regressive product is right (overall sign), but that's
actually partly a problem in the math itself (duals and the regressive
product still get poked at, so it may be just a matter of
interpretation).
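For reference, one common way to build the regressive product is through the dual, roughly a ∨ b = ⋆⁻¹(⋆a ∧ ⋆b) (conventions differ), so any sign ambiguity in the dual shows up directly in the regressive product's overall sign.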
This makes a slight improvement to the commutator product in that it
removes the expand statement, but there's still the problem of (a+a)/2.
However, at least now the product is correct and slightly less abysmal.
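For context, the commutator product in question is the usual a × b = (ab − ba)/2; when the operands anti-commute the two terms agree, which is presumably where the (a+a)/2 shape comes from and why it really should fold down to a single term.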
This takes advantage of evaluate_constexpr to do all the work. Necessary
for use of basis blade constants in algebra contexts (avoids an internal
error).
I had wanted to do this earlier but shied away from the large edit. Now
it became more necessary (and will become even more necessary when I get
to the glsl front-end).
It's now meant only for ALLOC. Interestingly, when DEBUG_QF_MEMORY is
defined in expr.c, something breaks badly with vkgen (no sniffles out of
valgrind, though), but everything is fine with it not defined. It seems
there may be some unpleasant UB going on somewhere.
This fixes the motor test :) It turns out that every lead I had
previously was due to the disabling of that feature "breaking" dags
(such that expressions wouldn't be found) and it was the dagged
multi-vector components getting linked by expr->next that made a mess of
things.
Or at least mostly so (there are a few casts). This doesn't fix the
motor bug, but I've wanted to do this for over twenty years and at least
I know what's not causing the bug. However, disabling fold_constants in
expr_algebra.c does "fix" things, so it's still a good place to look.
Especially binary expressions. That expressions can now be reused is
what caused the need to make expression lists non-invasive: the reuse
resulted in loops in the lists. This doesn't directly affect code
generation at this stage but it will help with optimizing algebraic
expressions.
The dags are per sequence point (as per my reading of the C spec).
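A made-up fragment to illustrate the idea (names are hypothetical):
x = (a + b) * (a + b);  // both (a + b) sub-expressions can share one dag node
a = 1;                  // sequence point: that node can no longer be reused
y = (a + b) * c;        // so a fresh (a + b) node gets built here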
The removal of the e tag from expr_t necessitated making convert_name
return a new expression, which resulted in get_type no longer being
enough to both convert a name expression and get the type. This was just
another missed spot. With this, all of game-source except ctf builds.
Finally, that little e. is cleaned up. convert_name was a bit of a pain
(in that it relied on modifying the expression rather than returning a
new one, or rather, that such behavior was relied on).
That is, passing int constants through ... in Ruamoko progs is no longer
a warning (still is for v6p and v6 progs). I got tired of getting the
warning for sizeof expressions when int through ... hasn't been a
problem for even most v6p progs, and was intended to not be a problem
for Ruamoko progs.
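For example (illustrative, assuming the usual Ruamoko printf):
printf ("size: %d\n", sizeof (vector));  // int constant through ..., no longer a warning for Ruamoko progs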
This makes working with the plethora of types a little easier still. The
check for an algebra expression in field_expr needed to be moved up
because the struct code thought the algebra type was a normal vector.
And geometric algebra vectors. This does break things a little in GA,
but it does bring qfcc's C closer to standard C in that sizeof respects
the alignment of the type (very important for arrays).
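For example, assuming vec3 is three words of data with four-word alignment:
sizeof (vec3)  // now 4 rather than 3, so elements of a vec3 array keep their alignment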
It's implemented as the Hodge dual, which is probably reasonable until
people complain. Both ⋆ and ! are supported, though the former is a
little hard to see in Consola.
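Roughly (one common non-metric formulation; conventions differ, hence the sign questions above): ⋆A is the complement blade chosen so that A ∧ ⋆A gives the positively oriented pseudoscalar, and ! is just the easier-to-see spelling of the same operator.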
That was surprisingly hard due to recursion and a not-so-good
implementation in expr_negate (it went too high-level, resulting in
multivec expressions getting to the code generator).
It turned out they were always using floats for the source type (meaning
doubles were broken), and not shifting the component in the final swizzle
code, meaning all swizzles were ?xxx (neglecting minus or 0). I'd make
tests, but I plan on modifying the instruction set a little bit.
This makes working with them much easier, and the type system reflects
what's in the multi-vector. Unfortunately, that does mean that large
algebras will wind up having a LOT of types, but it allows for efficient
storage of sparse multi-vectors:
auto v = 4*(e1 + e032 + e123);
results in:
0005 0213 1:0008<00000008>4:void 0:0000<00000000>?:invalid 0:0044<00000044>4:void assign (<void>), v
0006 0213 1:000c<0000000c>4:void 0:0000<00000000>?:invalid 0:0048<00000048>4:void assign (<void>), {v + 4}
Where the two source vectors are:
44:1 0 .imm float:18e [4, 0, 0, 0]
48:1 0 .imm float:1aa [4, 0, 0, 4]
They just happen to be adjacent, but don't need to be.
This gets only some very basics working:
* Algebra (multi-vector) types: eg @algebra(float(3,0,1)).
* Algebra scopes (using either the above or @algebra(TYPE_NAME) where
the above was used in a typedef).
* Basis blades (eg, e12) done via procedural symbols that evaluate to
suitable constants based on the basis group for the blade.
* Addition and subtraction of multi-vectors (only partially tested).
* Assignment of sub-algebra multi-vectors to full-algebra multi-vectors
(missing elements zeroed).
There's still much work to be done, but I thought it time to get
something into git.
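A rough usage sketch of the above (illustrative only; the algebra and names are just examples):
typedef @algebra(float(3,0,1)) PGA;  // 3d PGA as a named algebra type
// within an algebra scope (opened via the typedef name or the literal form):
auto a = e1 + e032;                  // basis blades via procedural symbols
auto b = a - e123;                   // addition/subtraction of multi-vectors
PGA m = b;                           // sub-algebra assigned to a full multi-vector;
                                     // the missing elements get zeroed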
Due to the joys of pointers and the like, it's a bit of a bolt-on for now,
but it works nicely for basic math ops which is what I wanted, and the
code is generated from the expression.
I never liked it, but with C2x coming out, it's best to handle bools
properly. I haven't gone through all the uses of int as bool (I'll leave
that for fixing when I encounter them), but this gets QF working with
c2x (really, gnu2x because of raw strings).
I think the current build_element_chain implementation does a reasonable
job, but I'm in the process of getting designated initializers working,
thus it will become important to ensure uninitialized members get
initialized.
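The concern, sketched with a hypothetical struct (designated-initializer syntax as in C):
typedef struct { int x; int y; int z; } point_t;
point_t p = { .y = 2 };  // x and z are never named, so build_element_chain
                         // has to make sure they come out zero-initialized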
The array access code was loading the vector, modifying the element,
then forgetting to write the modified vector back whence it came.
Simply writing it back would be rather sub-optimal, though, so now when
the vector is accessed via a pointer, the array code switches to field
access to get at the vector element, avoiding the need to copy the whole
vector.
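Illustrative only (names made up):
vector *v = &pos;
(*v)[1] = 2;  // old: load *v into a temp, tweak the temp, never store it back
              // new: pointer access turns this into field access (effectively v->y = 2)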
Arrays are passed as a pointer to the first element, so are always valid
parameters. Fixes a bogus "formal parameter N is too large to be passed
by value" error.
While swizzle does work, it requires the source to be properly aligned
and thus is not really the best choice. The extend instruction has no
alignment requirements (at all) and thus is much better suited to
converting a scalar to a vector type.
Fixes #30
The destination operand must be a full four component vector, but the
source can be smaller and small sources do not need to be aligned: the
offset of the source operand and the swizzle indices are adjusted. The
adjustments are done during final statement emission in order to avoid
confusing the data flow analyser (and that's when def offsets are known).
This came up when investigating an internal error from the line above.
It turned out the error was correct (problem with converting scalars to
vectors), but the break was not.
Currently, only vector/vec3 and quaternion/vec4 can be printed anyway,
but I plan on making explicit format strings for the types, so there
should be no need to promote any vector types (and really, any hidden
promotion is a bit of a pain, but standards...).
This allows all the tests to build and pass. I'll need to add tests to
ensure warnings happen when they should and that all vec operations are
correct (ouch, that'll be a lot of work), but vectors and quaternions
are working again.
I'd created new_value_expr some time ago, but never used it...
Also, replace convert_* with cast_expr to the appropriate type (removes
a pile of value check and create code).
Use with quaternions and vectors is a little broken in that
vec4/quaternion and vec3/vector are not the same types (by design) and
thus a cast is needed (not what I want, though). However, creating
vectors (that happen to be int due to int constants) does seem to be
working nicely otherwise.
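For example (illustrative):
vec4 v = { 0, 0, 0, 1 };
quaternion q = (quaternion) v;  // distinct types by design, so for now an explicit cast is needed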
Nicely, I was able to reuse the generated conversion code used by the
progs engine to do the work in qfcc, just needed appropriate definitions
for the operand macros, and to set up the conversion code. Helped
greatly by the new value load/store functions.
In working with vectors and matrices while testing the scene wrappers, I
found that there was a fair bit of confusion about how large something
could be. Return values can be up to 32 words (but qfcc wasn't aware of
that), parameters were limited to 4 words still (and possibly should be
for varargs), and temp defs were limited to 8 words (1 lvec4). Temps are
used for handling return values (at least when not optimizing) and thus
must be capable of holding a return value, and passing large arguments
through *formal* parameters should be allowed. It seems reasonable to
limit parameter sizes to return value sizes.
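To put numbers on that (progs words being 4 bytes): an lvec4 is 8 words, which was the old temp cap, while a 4x4 float matrix is 16 words, comfortably inside the 32-word return limit but well over the old limits.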
A temp and a move are still used for large return values (4x4 matrix),
but that's an optimization issue: the code itself is at least correct.
This is achieved by marking a void function with the void_return
attribute and then calling that function in an @return expression.
@return can be used only inside a void function and only with void
functions marked with the void_return attribute. As this is intended for
Objective-QC message forwarding, it is deliberately "difficult" to use
as returning a larger than expected value is unlikely to end well for
the calling function.
However, as a convenience, "@return nil" is allowed (in a void
function). It always returns an integer (which, of course, can be
interpreted as a pointer). This is safe because if the return value is
ignored, it will go into the progs return buffer, and if it is not
ignored, it is the smallest value that can be returned.
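A minimal sketch of the convenience case (function name made up; nothing else here beyond what's described above):
void forward_or_bail (void)
{
    @return nil;  // legal in a void function: returns the smallest possible value,
                  // harmless whether or not the caller looks at it
}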
They're now properly part of the type system and can be used for
declaring variables, initialized (using {} block initializers), operated
on (=, *, + tested) though much work needs to be done on binary
expressions, and indexed. So far, only ivec2 has been tested.
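For example (only ivec2 is known to be tested):
ivec2 a = { 1, 2 };  // declared and initialized with a {} block initializer
ivec2 b = { 3, 4 };
ivec2 c = a + b;     // +, * and = are the ops exercised so far
int   y = c[1];      // and indexing works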
Since address expressions always produce a pointer type, aliasing one to
another pointer type is redundant. Instead, simply return an address
expression with the desired type.
This is the intended purpose of the offset field in address expressions,
and will make struct and array accesses more efficient when I sort out
the code generation side.
Ruamoko passes va_list (@args) through the ... parameter (as such), but
IMP uses ... to defeat parameter type and count checking and doesn't
want va_list. While possibly not the best solution, adding a no_va_list
flag to function types and skipping ex_args entirely does take care of
the problem without hard-coding anything specific to IMP.
The system currently just sets some bits in the type specifier (the
attribute list should probably be carried around with the specifier),
but it gets the job done for now, and at least gets things started.
Since Ruamoko progs must use lea to get the address of a local variable,
add use/def/kill references to the move instruction in order to inform
flow analysis of the variable since it is otherwise lost via the
resulting pointer (not an issue when direct var reference move can be
used).
The test and digging for the def can probably do with being more
aggressive, but this did nicely as a proof of concept.
Aliasing an alias expression to the same type as the original aliased
expression is a no-op, so drop the alias entirely in order to simplify
code generation.
Simply dereferencing a pointer does not need to go through array_expr
and thus collect a 0 offset that will only be constant-folded out again.
Really just a minor optimization in qfcc, but at one stage in today's
modification, it resulted in some unwanted aliasing chains.
While this does make the generated code a little worse (load is behaving
nicely), the two are at least consistent with each other and when I fix
one, I'll fix both. I missed this change the other day when I did the
address_expr cleanup. Yay near-duplicate code :P