The parameter defs are allocated from the parameter space using a
minimum alignment of 4, and varargs functions get a va_list struct in
place of the ...
An "args" expression is unconditionally injected into the call arguments
list at the place where ... is in the list, with arguments passed
through ... coming after the ...
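For reference, the va_list the varargs function receives is conceptually
just a count plus a pointer to the arguments that came through the ...
(a rough sketch; the real struct and field names in the VM headers may
differ):

    // Rough shape of the va_list a varargs function gets in place of
    // the ... (names approximate).
    typedef struct {
        int         count;  // number of arguments passed through the ...
        unsigned    list;   // progs-space pointer to the first of them
    } va_list_sketch_t;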
Arguments get through to functions now, but there are problems with
taking the address of local variables: it's currently done using
constant pointer defs, which can't work for the base register addressing
used in Ruamoko progs.
With the update to test-bi's printf (and a hack to qfcc for lea),
triangle.r actually works, printing the expected results (but -1 instead
of 1 for equality, though that too is actually expected). qfcc will take
a bit longer because it seems there are some design issues in address
expressions (ambiguity, and a few other things) that have pretty much
always been there.
While all base registers can be used for any purpose at any time (this
is why the with instruction has hard-absolute modes: you can never get
permanently lost), qfcc currently uses the convention of register 0 for
globals and register 1 for stack locals (params, locals, function args).
The register used to access a def is stored in the def and that is used
to set the register bits in the instruction opcode.
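A minimal sketch of that last step (names and bit layout invented, not
the actual ISA encoding): the def remembers its base register, and emit
folds it into the opcode word per operand.

    // Illustrative only: the real def type and bit positions differ.
    typedef struct { unsigned reg; unsigned offset; } def_sketch_t;

    static unsigned
    encode_operand_reg (unsigned opcode, const def_sketch_t *def, int slot)
    {
        // a couple of bits per operand select the base register
        return opcode | ((def->reg & 3) << (2 * slot));
    }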
The def code actually doesn't know anything about any conventions: it
assumes all non-temp defs are global (the function code updates
the defs before emitting code) and the current function provides the
register to use for any temp defs allocated while emitting code.
Seems to work well, but debug is utterly messed up (not surprised, that
will be tricky).
Still need to get the base register index into the instructions, but I
think this is it for basic code generation. I should be able to start
testing Ruamoko properly fairly soon :)
Thanks to the use/def/kill lists attached to statements for pseudo-ops,
it turned out to be a lot easier to implement flow analysis (and thus
dags processing) than I expected. I suspect I should go back and make
the old call code use them too, and probably several other places, as
that will greatly simplify the edge setting.
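The shape of the idea (a sketch, not the actual statement type): each
statement carries explicit operand lists for what it uses, defines, and
kills, so flow analysis just walks those instead of special-casing
statement kinds.

    // Sketch only: the real statement type has more fields and
    // different names.
    typedef struct operand_sketch_s operand_sketch_t;
    typedef struct {
        operand_sketch_t *use;   // operands read by the statement
        operand_sketch_t *def;   // operands written by the statement
        operand_sketch_t *kill;  // operands invalidated (eg, by a call)
    } statement_sketch_t;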
This means that the actual call expression is not in the statement list
of the enclosing block expression, but just its result, whether the call
is void or not. This actually simplifies several things, but most
importantly will make Ruamoko calls easier to implement.
The test is there because I had some trouble with double-calls, and it's
how I found the return-postop issue :P
Since Ruamoko now uses the stack for parameters and locals, parameters
need to come after locals in the address space (instead of before, as in
v6 progs). Thus use separate spaces for parameters and locals regardless
of the target, then stitch them together appropriately for the target.
The third space is used for allocating stack space for arguments to
called functions. It is not used for v6 progs, and comes before locals
in Ruamoko progs.
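Roughly, the resulting Ruamoko frame looks like this (ascii sketch
only):

    /* Ruamoko stack frame, low addresses at the top:
     *
     *   args space   - arguments being built for calls this function makes
     *   locals       - this function's local variables
     *   params       - this function's incoming parameters
     *
     * v6 progs instead use a single flat locals space with the
     * parameters first.
     */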
Other than the return value, and optimization (internal compiler error:
not implemented), calls in Ruamoko look like they'll work.
This seems to be the most reasonable approach to allocating space for
function call parameters without using push and pop (or adding to the
stack pointer), though it's probably good even when using push and pop
to help keep things aligned.
Operand width is encoded in the instruction opcode, so the width needs
to be accounted for in order to select the correct instruction. With
this, my little test generates correct code for the Ruamoko ISA (except
for return, which still fails).
long is ignored for double, and v6p progs are stuck with 32 bits for
longs (don't feel like extending v6p any further), but the basics are
there for Ruamoko.
short is ignored for ints because the minimum size is 32, and signed is
just noise for ints anyway (and no chars, so...).
unsigned, however, is finally implemented properly (or at least seems to
be working correctly: tests pass after getting things compiling again,
and lt.u is used where it should be :)
And other related fields so integer is now int (and uinteger is uint). I
really don't know why I went with integer in the first place, but this
will make using macros easier for dealing with types.
They are both gone, and pr_pointer_t is now pr_ptr_t (pointer may be a
little clearer than ptr, but ptr is consistent with things like intptr,
and keeps the type name short).
This includes calls and unconditional jumps, relative and through a
table. The parameters are all lumped into the one object, with some
being unused by the different types (eg, args and ret_type used only by
call expressions). Just having nice names for the parameters (instead of
e1 and e2) makes things much more readable, even with all the sub-types
lumped together.
No mysterious type aliasing bugs this time ;)
The move operator names are definitely obsolete (due to dropping the
expressions a year or two ago) and the precedence checks seem to be
handled elsewhere. Memset and state expressions went away a while back
too.
While this was a pain to get working, that pain only went to prove the
value of using proper "types" (even if only an enum) for different
expression types: just finding all the places to edit was a chore, and
it was easy to make mistakes (forgetting bits here and there).
Strangely enough, this exposed a pile of *type* aliasing bugs (next
commit).
At this stage, I doubt emit.c will need to know the details of the
target (v6, v6p, ruamoko) since the instruction formats are identical,
just different meanings for the opcode itself.
While qfcc dealing sensibly with mixed target VMs in the object files
has always been an outstanding issue, with the new instruction set it
has become a priority. Most importantly, this should allow QF to
continue building while I work on qfcc targeting the new IS.
It does little good for documentation to refer to fields that don't
exist (because a certain someone forgot to change the docs when changing
the field names, I wonder who :P).
And partial implementations in qfcc (most places will generate an
internal error (not implemented) or segfault, but some low-hanging fruit
has already been implemented).
The opcode table is a nightmare to maintain, but this does clean it up
and speed up opcode lookups since they can now be indexed. Of course, it
turns out I had missed adding several instructions, so had to fix that,
and qfcc needed a bit of a re-jigger to get the opcode out of the table.
qfo_to_progs was modifying the space data pointers in the input qfo,
making it impossible to reuse the qfo. However, qfo_relocate_refs needs
the updated pointers, thus do a shallow copy of the qfo and its spaces
(but not any of the data).
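A sketch of the fix (invented stand-in types; the real qfo_t is more
involved): copy the container and its space headers, but keep sharing
the actual data blocks.

    #include <stdlib.h>
    #include <string.h>

    // Stand-ins for qfo_t and its space type; the real ones differ.
    typedef struct { void *data; } space_sketch_t;
    typedef struct { space_sketch_t *spaces; int num_spaces; } qfo_sketch_t;

    // Shallow-copy the qfo and its space headers so the pointer fixups
    // needed for qfo_relocate_refs don't modify the caller's qfo; the
    // space data itself stays shared.
    static qfo_sketch_t
    qfo_shallow_copy (const qfo_sketch_t *qfo)
    {
        qfo_sketch_t work = *qfo;
        work.spaces = malloc (qfo->num_spaces * sizeof (*work.spaces));
        memcpy (work.spaces, qfo->spaces,
                qfo->num_spaces * sizeof (*work.spaces));
        return work;
    }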
I decided that the check for whether control reaches the end of the
function without performing some necessary action (eg, invoking
[super dealloc] in a derived -dealloc) is conceptually the return
statement using a pseudo operand and the necessary action defining that
pseudo operand and thus is the same as checking for uninitialised
variables. Thus, add a pseudo operand type and use one to represent the
invocation of [super dealloc], with a special function to call when the
"used" pseudo operand is "uninitialised".
While I currently don't know what else pseudo operands could be used
for, the system should be flexible enough to add any check.
Fixes #24
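The shape of the check, very roughly (invented names): the return
statement "uses" the pseudo operand, the [super dealloc] call "defines"
it, and when the use is reached without a def the per-operand callback
fires instead of the generic uninitialised warning.

    #include <stdio.h>

    // Sketch with invented names; the real operand and flow types differ.
    typedef struct {
        int    pseudo_id;                        // which pseudo operand
        void (*uninitialized) (const char *loc); // called on use without def
    } pseudo_operand_sketch_t;

    // The -dealloc check only supplies the callback and its message; the
    // analysis itself is the ordinary uninitialised-variable pass.
    static void
    super_dealloc_missing (const char *loc)
    {
        fprintf (stderr, "%s: -dealloc does not invoke [super dealloc]\n",
                 loc);
    }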
I want to use the function's pseudo address that was used for managing
aliased temporary variables for other pseudo operands as well. The new
name seems to better reflect the variable's purpose even without the
other pseudo operands, as temporary variables are, effectively, pseudo
operands until they are properly allocated.
Forgetting to invoke [super dealloc] in a derived class's -dealloc
method has caused me to waste far too much time chasing down the
resulting memory leaks and crashes. This is actually the main focus of
issue #24, but I want to take care of multiple paths before I consider
the issue to be done.
However, as a bonus, four cases were found :)
While get_selector does the job of getting a selector from a selector
reference expression, I have long considered lumping various expression
types under ex_expr to be a mistake. Not only is this a step towards
sorting that out, it will make working on #24 easier.
In order to correctly handle swap-style code
{ t = a; a = b; b = t; }
edges need to be created for each of the assignments moving an
identifier label, but the dag must remain acyclic (the above example
wants to create a cycle). Having the reachable nodes recorded makes
checking for potential loops a quick operation.
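With the reachable sets in place, the check before adding an edge a -> b
is just "is a already reachable from b?" (a sketch; the real dag code
works on node indices and sets):

    // Invented names: refuse an edge a -> b if a is reachable from b,
    // since that edge would close a cycle.
    typedef struct node_sketch_s {
        struct node_sketch_s **reachable;   // nodes reachable from this one
        int                    num_reachable;
    } node_sketch_t;

    static int
    edge_would_cycle (const node_sketch_t *a, const node_sketch_t *b)
    {
        for (int i = 0; i < b->num_reachable; i++)
            if (b->reachable[i] == a)
                return 1;
        return 0;
    }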
There's still some cleanup to do, but everything seems to be working
nicely: `make -j` works, `make distcheck` passes. There is probably
plenty of bitrot in the package directories (RPM, debian), though.
The vc project files have been removed since those versions are way out
of date and quakeforge is pretty much dependent on gcc now anyway.
Most of the old Makefile.am files are now Makemodule.am. This should
allow for new Makefile.am files that allow local building (to be added
on an as-needed basis). The current remaining Makefile.am files are for
standalone sub-projects.
The installable bins are currently built in the top-level build
directory. This may change if the clutter gets to be too much.
While this does make a noticeable difference in build times, the main
reason for the switch was to take care of the growing dependency issues:
now it's possible to build tools for code generation (eg, using qfcc and
ruamoko programs for code-gen).
The compilation unit stores the directory from which qfcc was run and
any source files mentioned. This is similar to dwarf's compilation unit.
Right now, this is the only data in the new debug space, but more might
come in the future so it seems best to treat the debug space separately
in the object files.
getcwd is assumed to use malloc if its buf param is null. This may need
fixing in the future, but it's in one spot. The result is saved in the
non-progs pool.
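For reference, the behaviour relied on is the glibc extension where a
null buffer means "allocate one for me" (POSIX leaves that case
unspecified):

    #include <unistd.h>
    #include <stdlib.h>
    #include <stdio.h>

    int
    main (void)
    {
        // glibc (and most modern libcs) malloc a buffer when buf is null;
        // strictly an extension, hence the "may need fixing" note above.
        char *dir = getcwd (0, 0);
        if (dir) {
            printf ("%s\n", dir);
            free (dir);   // qfcc instead keeps a copy in its non-progs pool
        }
        return 0;
    }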
It never really helped sort out the path issues when using build
directories. It worked well enough for single directory projects, but
things got messy very quickly, especially when mixing ruamoko libs with
external progs. A better method based on dwarf is coming.
It's not connected up yet because I'm unsure of just where to put things
(it gets messy fast), but just being able to see the structure of
complex types is nice.
This eases type unaliasing on functions a little.
Still more to go, but this fixes a really hair-pulling bug: Linux's
heap randomiser was making the typedef test fail randomly whenever
typedef.qfo was compiled.
When a type is aliased, the alias has two type chains: the simple type
chain with all other aliases stripped, and the full type chain. There
are still plenty of bugs in it, but having the clean type chain takes
care of the major issue that was in the previous attempt, as only the
head of the type-chain needs to be skipped for type comparison.
Most of the bugs are in finding the locations where the head needs to be
skipped.
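The shape of it (invented field names; the real type struct stores this
differently): the alias head carries both chains, and comparison only
has to skip the head before following the clean chain.

    // Invented names; sketch of the two chains hanging off an alias head.
    typedef struct type_sketch_s {
        struct type_sketch_s *clean_chain; // aliases below the head stripped
        struct type_sketch_s *full_chain;  // every alias kept, for diagnostics
    } type_sketch_t;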
All simple type checks are now done using is_* helper functions. This
will help hide the implementation details of the type system from the
rest of the compiler (especially the changes needed for type aliasing).
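The pattern, in miniature (names indicative only; qfcc's actual helpers
and type internals differ):

    // Callers test through a helper instead of poking at the type's
    // internal tag fields directly.
    typedef enum { ty_basic_sketch, ty_struct_sketch } ty_meta_sketch_t;
    typedef struct { ty_meta_sketch_t meta; } type_tag_sketch_t;

    static int
    is_struct_sketch (const type_tag_sketch_t *type)
    {
        return type->meta == ty_struct_sketch;
    }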
That is, those created by operand_address. The dag code needs the
expression that is attached to the statement to have the correct
expression type in order to do the right thing with the operands and
aliasing, especially when generating temps. This fixes assignchain when
optimizing (all tests pass again).
Now convert_nil only assigns the nil expression a type, and nil makes
its way down to the statement emission code (where it belongs, really).
Breaks even more things :)
It's not possible to take the address of constants (at this stage) and
trying to use a move instruction with .zero as source would result in
the VM complaining about null pointer access when bounds checking is on.
Thus, don't convert a nil source expression until it is known to be
safe, and use memset when it is not.
This fixes the problem of using the return value of a function as an
element in a compound initializer. The cause of the problem is that
compound initializers were represented by block expressions, but
function calls are contained within block expressions, so def
initialization saw the block expression and thought it was a nested
compound initializer.
Technically, it was a bug in the nested element parsing code in that it
wasn't checking the result value of the block expression, but using a
whole new expression type makes things much cleaner and the work done
paves the way for labeled initializers and compound assignments.
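The failing shape was essentially this (invented names, C syntax, which
Ruamoko shares here): the call's result is one element of the compound
initializer, and the old code mistook the call's block expression for a
nested initializer.

    typedef struct { float x, y, z; } vec3_sketch_t;
    float get_x (void);

    void
    example (void)
    {
        // With block expressions doing double duty, the get_x () call's
        // block was misread as a nested compound initializer.
        vec3_sketch_t v = { get_x (), 2, 3 };
        (void) v;
    }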
Multi-line calls (especially messages) got rather confusing to read as
the lines jumped back and forth. Now the binding is better but the dags
code is reordering the parameters sometimes.