The destination operand must be a full four-component vector, but the
source can be smaller, and small sources do not need to be aligned: the
offset of the source operand and the swizzle indices are adjusted to
compensate. The adjustments are done during final statement emission,
both to avoid confusing the data flow analyser and because that is when
def offsets are known.
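A rough sketch of the sort of index rebasing involved (the packing and
the helper here are assumptions for illustration, not the actual emit
code): shifting the source operand to an aligned offset means each
2-bit swizzle index gets bumped by the same amount.

    /* hypothetical: assumes the four source indices live in the low
     * byte of the swizzle operand, two bits each, with any other flag
     * bits left alone */
    static int
    adjust_swizzle (int swizzle, int delta)
    {
        int adjusted = swizzle & ~0xff;
        for (int i = 0; i < 4; i++) {
            int ind = (swizzle >> (2 * i)) & 3;
            adjusted |= ((ind + delta) & 3) << (2 * i);
        }
        return adjusted;
    }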
Having three very similar sets of code for outputting values (just for
debug purposes even) got to be a tad annoying. Now there's only one, and
in the right place, too (with the other value code).
Use with quaternions and vectors is a little broken in that
vec4/quaternion and vec3/vector are not the same types (by design) and
thus a cast is needed (not what I want, though). However, creating
vectors (that happen to be int due to int constants) does seem to be
working nicely otherwise.
It seems clang defaults to unsigned for enums. Interestingly, gcc was ok
with the checks being either way. I guess gcc treats enums that *can* be
unsigned as DWIM.
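A minimal standalone illustration of the difference (not code from the
tree):

    /* With clang, the enum's underlying type may be unsigned, making
     * `t >= 0` always true (and warned about as tautological); gcc
     * accepted the check written either way. */
    enum thing { THING_A, THING_B, THING_C };

    int
    thing_valid (enum thing t)
    {
        return t >= 0 && t <= THING_C;
    }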
With explicit operators, even. While they're a tad verbose, they're at
least unambiguous and most importantly have the right precedence (or at
least adjustable precedence if I got it wrong, but vector ops having
higher precedence than scalar or component ops seems reasonable to me).
It's possible I lost the child printing when creating the return
expressions, but dot diagrams are much more useful when they don't have
nodes with just pointer values.
The parameter defs are allocated from the parameter space using a
minimum alignment of 4, and varargs functions get a va_list struct in
place of the ...
An "args" expression is unconditionally injected into the call arguments
list at the place where ... is in the list, with arguments passed
through ... coming after the ...
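A hedged sketch of the idea (expr_t, append_expr and new_args_expr
stand in for whatever the real qfcc types and helpers are):

    /* the fixed arguments are emitted as usual, an "args" expression is
     * inserted where the ... sits in the parameter list, and anything
     * passed through the ... follows it */
    static expr_t *
    build_varargs_call (expr_t *arg_list, expr_t **args, int count,
                        int dots_index)
    {
        for (int i = 0; i < dots_index; i++)
            append_expr (arg_list, args[i]);        // normal parameters
        append_expr (arg_list, new_args_expr ());   // marks the ... position
        for (int i = dots_index; i < count; i++)
            append_expr (arg_list, args[i]);        // arguments passed through ...
        return arg_list;
    }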
Arguments get through to functions now, but there are problems with taking
the address of local variables: it is currently done using constant pointer
defs, which can't work for the base register addressing used in Ruamoko
progs.
With the update to test-bi's printf (and a hack to qfcc for lea),
triangle.r actually works, printing the expected results (but -1 instead
of 1 for equality, though that too is actually expected). qfcc will take
a bit longer because it seems there are some design issues in address
expressions (ambiguity, and a few other things) that have pretty much
always been there.
And other related fields, so integer is now int (and uinteger is uint). I
really don't know why I went with integer in the first place, but this
will make it easier to use macros for dealing with types.
This includes calls and unconditional jumps, relative and through a
table. The parameters are all lumped into the one object, with some
being unused by the different types (eg, args and ret_type used only by
call expressions). Just having descriptive names for the parameters
(instead of e1 and e2) is a big improvement, even with all the sub-types
lumped together.
No mysterious type aliasing bugs this time ;)
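Roughly, and with the caveat that the exact names are an assumption
rather than a quote from the source, the sub-type looks something like:

    typedef struct {
        pr_branch_e type;           // jump/call/conditional-branch kind
        expr_t     *target;         // branch destination
        expr_t     *index;          // index for jumps through a table
        expr_t     *test;           // condition (null for jump and call)
        expr_t     *args;           // used only by call expressions
        struct type_s *ret_type;    // used only by call expressions
    } ex_branch_t;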
The move operator names are definitely obsolete (due to dropping the
expressions a year or two ago) and the precedence checks seem to be
handled elsewhere. Memset and state expressions went away a while back
too.
While this was a pain to get working, that pain only went to prove the
value of using proper "types" (even if only an enum) for different
expression types: just finding all the places to edit was a chore, and
it was easy to make mistakes (forgetting bits here and there).
Strangely enough, this exposed a pile of *type* aliasing bugs (next
commit).
And partial implementations in qfcc (most places will generate an
internal error (not implemented) or segfault, but some low-hanging fruit
has already been implemented).
While get_selector does the job of getting a selector from a selector
reference expression, I have long considered lumping various expression
types under ex_expr to be a mistake. Not only is this a step towards
sorting that out, it will make working on #24 easier.
It now takes a context pointer (opaque data) that holds the buffers it
uses for the temporary strings. If the context pointer is null, a static
context is used (making those uses of va NOT thread-safe). Most calls to
va use the static context, but all such calls have been formatted
consistently so they are easy to find when it comes time to do a full
audit.
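Usage ends up looking roughly like this (the context type and
constructor here are assumptions):

    va_ctx_t   *ctx = va_create_context (4);              // per-thread context
    const char *path = va (ctx, "%s/%s", "maps", "e1m1"); // thread-safe
    const char *tmp  = va (0, "frame%03d", 12);           // static context: NOT thread-safe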
There's still some cleanup to do, but everything seems to be working
nicely: `make -j` works, `make distcheck` passes. There is probably
plenty of bitrot in the package directories (RPM, debian), though.
The vc project files have been removed since those versions are way out
of date and quakeforge is pretty much dependent on gcc now anyway.
Most of the old Makefile.am files are now Makemodule.am. This should
allow for new Makefile.am files that support local building (to be added
on an as-needed basis). The remaining Makefile.am files are for
standalone sub-projects.
The installable bins are currently built in the top-level build
directory. This may change if the clutter gets to be too much.
While this does make a noticeable difference in build times, the main
reason for the switch was to take care of the growing dependency issues:
now it's possible to build tools for code generation (eg, using qfcc and
ruamoko programs for code-gen).
All simple type checks are now done using is_* helper functions. This
will help hide the implementation details of the type system from the
rest of the compiler (especially the changes needed for type aliasing).
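For example (the specific helpers and field here are illustrative
assumptions), a check goes through a predicate rather than poking at
the type struct:

    /* the predicate hides how the basic type is stored */
    if (is_float (type) || is_double (type)) {
        /* handle the scalar float types */
    }
    /* instead of something like: if (type->type == ev_float || ...) */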
Only as scalars; I still need to think about what to do for vectors and
quaternions due to param size issues. Also, doubles are not yet
guaranteed to be correctly aligned.
Currently, they can represent either vectors or quaternions, and the
quaternions can be in either [s, v] form or [w, x, y, z] form.
Many things will not actually work yet, as the vector expression needs to be
converted into the appropriate form for assigning the elements to the
components of the "vector" type.
It turns out expression trees are (mostly?) valid DAGs, so all edges being
constrained works, though the graphs get a little tall (but easier to read).