I never liked it, but with C2x coming out, it's best to handle bools
properly. I haven't gone through all the uses of int as bool (I'll leave
that for fixing when I encounter them), but this gets QF working with
c2x (really, gnu2x, because of raw strings).
The warning flag check worked too well: it enabled the warning and
autoconf's default main wanted the const attribute. The bug has been
floating around for a while, it seems.
I'm not certain this is correct, but it seems to me that du-chains are
the same information as ud-chains, but from the defining statement's
point of view instead of that of the using statement.
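A tiny plain-C illustration of how I read the relationship (nothing
qfcc-specific):

void
chains (void)
{
    int x = 1;      /* d1: defines x                   */
    int y = x + 2;  /* u1: uses x;     d2: defines y   */
    int z = x * y;  /* u2: uses x, y;  d3: defines z   */
    (void) z;

    /* ud-chains (one per use): u1.x -> {d1}, u2.x -> {d1}, u2.y -> {d2}
     * du-chains (one per def): d1.x -> {u1, u2}, d2.y -> {u2}
     * Same pairs of statements, just indexed from the other end.     */
}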
I think the current build_element_chain implementation does a reasonable
job, but I'm in the process of getting designated initializers working,
thus it will become important to ensure uninitialized members get
initialized.
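For reference, the behavior I'm after is (I assume) the usual C
semantics, where members not named in the initializer come out as zero;
a plain C sketch:

typedef struct {
    float x, y, z;
} vec3s_t;

/* Only y is mentioned, so x and z must still be initialized (to 0)
 * rather than left as whatever happened to be in the storage. */
vec3s_t v = { .y = 2 };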
I never liked the various hacks I had come up with for representing
resource handles in Ruamoko. Structs with an int were awkward to test,
pointers and ints could be modified, and so on. The new @handle keyword
(@ used to keep handle free for use as an identifier) works just like
struct, union and enum in syntax, but creates an opaque type suitable
for a 32-bit handle. The backing type is a function, so v6 progs can use
it without issue (all the necessary opcodes exist) and no modifications
were needed for type-checking in binary expressions; only assignment,
comparisons, and (of course) nil are supported. Tested using cbuf_t and
QFile: seems
to work as desired.
I had considered 64-bit handles, but really, if more than 4G resource
objects are needed, I'm not sure QF can handle the game. However, that
limit is per resource manager, not total.
The first use will be pointer analysis for function arguments: when an
argument points to an array, the array needs to be marked as live. I'm sure
there'll be plenty of other uses.
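The kind of situation I have in mind (plain C; the names are made up):

static int
sum (const int *v, int count)
{
    int total = 0;
    for (int i = 0; i < count; i++) {
        total += v[i];
    }
    return total;
}

int
use_array (void)
{
    int data[4] = { 1, 2, 3, 4 };
    /* data is touched only through the pointer passed to sum, so the
     * analysis has to mark the whole array as live at the call. */
    return sum (data, 4);
}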
Needed for proper analysis (ud-chains etc). Of course, it was then
necessary to remove the parameter defs from the uninitialized defs.
Also, plug a couple of memory leaks (forgot to free some temporary
sets).
I had messed up the handling of declarators for combinations of pointer,
function, and array: the pointer would get lost (and presumably arrays
of functions etc). I think I had gotten confused and thought things were
a tree rather than a simple list, but Holub set me straight once again
(I've never regretted getting that book). Once I understood that, it was
just a matter of finding all the places that needed to be fixed. Nicely,
most of the duplicated code has been refactored and should be easier to
debug in the future.
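The sorts of declarator combinations involved (plain C):

int *pf (void);         /* function returning pointer to int        */
int (*fp) (void);       /* pointer to function returning int        */
int *ap[4];             /* array of four pointers to int            */
int (*pa)[4];           /* pointer to an array of four ints         */
int (*fpa[4]) (void);   /* array of four pointers to functions      */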
The type system rewrite had lost some of the checks for function fields.
This puts the actual code in the one place and covers parameters as well
as globals.
The symtab code itself cares only about global/not global for the size
of the hash table, but other code can use the symtab type for various
checks (eg, parameter shadowing).
Along with QuakeC's, of course. This fixes the typeredef2 test (a lot
of work for one little syntax error). Unfortunately, it came at the cost
of requiring `>>` in front of state expressions on C-style functions
(QuakeC-style functions are unaffected). Also, there are now two
shift/reduce conflicts with structs and unions (but these same conflicts
are in gcc 3.4).
This has highlighted the need for having the equivalent of the
expression tree for the declaration system as there are now several
hacks to deal with the separation of types and declarators. But that's a
job for another week.
The grammar constructs for declarations come from gcc 3.4's parser (I
think it's the last version of gcc that used bison. Also, 3.4 is still
GPL 2, so no chance of an issue there).
While swizzle does work, it requires the source to be properly aligned
and thus is not really the best choice. The extend instruction has no
alignment requirements (at all) and thus is much better suited to
converting a scalar to a vector type.
Fixes #30
As a class's ivars are built up by inheritance, but with only that
class's ivars in the symbol table, it is necessary to include an offset
based on the super class's ivars in order to ensure alignments are
respected. This is achieved via the new `base` parameter to
build_struct(), which is used to offset the current size while
calculating the aligned offset of the symbols. The parameter is ignored
for unions, as they always start at 0. The ivars for the current class
still have a base offset of 0 until they are actually added to the
class.
Fixes #29
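A sketch of the offset calculation as I understand it (not the actual
build_struct() code; this helper is made up, and power-of-two alignments
are assumed):

/* base is the size of the superclass's ivars, size is how far through
 * the current class's own ivars we are, alignment is in the same units
 * (words). The returned offset stays relative to the current class. */
static unsigned
ivar_offset (unsigned base, unsigned size, unsigned alignment)
{
    unsigned absolute = base + size;
    absolute = (absolute + alignment - 1) & ~(alignment - 1);
    return absolute - base;
}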
The alignment is specified as a power of 2 (ie, actual alignment = 1 <<
alignment) allowing old object files to be compatible (as their
alignment is 0). This is necessary (in part for #30) as it turned
out even global vectors were not aligned correctly.
Currently, only data spaces even vaguely respect alignment. This may
need to be fixed in the future.
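In code, the decoding is simply:

/* The stored value is a power of 2; old object files store 0, which
 * decodes to an alignment of 1 word. */
unsigned
real_alignment (unsigned stored)
{
    return 1u << stored;    /* eg, 3 -> 8 words for dvec4 */
}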
Most were pretty easy and fairly logical, but gib's regex was a bit of a
pain until I figured out the real problem was the conditional
assignments.
However, libs/gamecode/test/test-conv4 fails when optimizing due to gcc
using vcvttps2dq (which is nice, actually) for vector forms, but not the
scalar equivalent other times. I haven't decided what to do with the
test (I might abandon it, as it does seem to rely on undefined behavior).
The destination operand must be a full four component vector, but the
source can be smaller and small sources do not need to be aligned: the
offset of the source operand and the swizzle indices are adjusted. The
adjustments are done during final statement emission in order to avoid
confusing the data flow analyser (and that's when def offsets are known).
Having three very similar sets of code for outputting values (just for
debug purposes even) got to be a tad annoying. Now there's only one, and
in the right place, too (with the other value code).
I'd created new_value_expr some time ago, but never used it...
Also, replace convert_* with cast_expr to the appropriate type (removes
a pile of value checking and creation code).
Use with quaternions and vectors is a little broken in that
vec4/quaternion and vec3/vector are not the same types (by design) and
thus a cast is needed (not what I want, though). However, creating
vectors (that happen to be int due to int constants) does seem to be
working nicely otherwise.
Nicely, I was able to reuse the generated conversion code used by the
progs engine to do the work in qfcc, just needed appropriate definitions
for the operand macros, and to set up the conversion code. Helped
greatly by the new value load/store functions.
pr_type_t now contains only the one "value" field, and all the access
macros now use their PACKED variant for base access, making access to
larger types more consistent with the smaller types.
In working with vectors and matrices while testing the scene wrappers, I
found that there was a fair bit of confusion about how large something
could be. Return values can be up to 32 words (but qfcc wasn't aware of
that), parameters were limited to 4 words still (and possibly should be
for varargs), and temp defs were limited to 8 words (1 lvec4). Temps are
used for handling return values (at least when not optimizing) and thus
must be capable of holding a return value, and passing large arguments
through *formal* parameters should be allowed. It seems reasonable to
limit parameter sizes to return value sizes.
A temp and a move are still used for large return values (4x4 matrix),
but that's an optimization issue: the code itself is at least correct.
Currently only via pragma (not command line options), but I needed to
test the concept. Converting legacy code is just too error prone.
Telling the compiler how to treat the operator makes more sense. When *
acts as @dot with Ruamoko progs, the result is automatically aliased as
a float as this is the legacy meaning (ie, float result for dot
product).
This is achieved by marking a void function with the void_return
attribute and then calling that function in an @return expression.
@return can be used only inside a void function and only with void
functions marked with the void_return attribute. As this is intended for
Objective-QC message forwarding, it is deliberately "difficult" to use
as returning a larger than expected value is unlikely to end well for
the calling function.
However, as a convenience, "@return nil" is allowed (in a void
function). It always returns an integer (which, of course, can be
interpreted as a pointer). This is safe because if the return value is
ignored, it will go into the progs return buffer, and if it is not
ignored, it is the smallest value that can be returned.
Having to remember to copy yet another specifier bit was getting
tedious, so use a union of a struct with the bitfields and an unsigned
int to access them in parallel. Makes for a tidier spec_merge, and one
less headache.
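Roughly the shape of it (a sketch, not the actual qfcc declaration; the
flag names here are made up):

typedef union {
    struct {                    /* one bit per specifier flag */
        unsigned is_typedef:1;
        unsigned is_const:1;
        unsigned is_nosave:1;
        /* ... */
    };
    unsigned bits;              /* all of the flags viewed in parallel */
} spec_bits_t;

static spec_bits_t
merge_bits (spec_bits_t dst, spec_bits_t src)
{
    dst.bits |= src.bits;       /* no copying each flag by hand */
    return dst;
}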
The command line option works the same way as
--advanced/traditional/extended, as does the pragma. As well, raumoko
(alternative spelling) can be used because both are legitimate and some
people may prefer one spelling over the other.
As always, use of the pragma is at one's own risk: its intended use is
forcing the target in the unit tests.
dvec4, lvec4 and ulvec4 need to be aligned to 8 words (32 bytes) in
order to avoid hardware exceptions. Rather than dealing with possibly
mixed alignment when a function has 8-word aligned locals but only
4-word aligned parameters, simply keep the stack frame 8-word aligned at
all times.
As for sizes, the temp def recycler was written before the Ruamoko ISA
was even a pipe dream and thus never expected temp def sizes over 4. At
least now any future adjustments can be done in one place.
My quick and dirty test program works :)
dvec4 xy = {1d, 2d, 0d, 0.5};
void printf(string fmt, ...) = #0;
int main()
{
    dvec4 u = {3, 4, 3.14};
    dvec4 v = {3, 4, 0, 1};
    dvec4 w = v * xy + u;
    printf ("[%g, %g, %g, %g]\n", w[0], w[1], w[2], w[3]);
    return 0;
}
They're now properly part of the type system: variables can be
declared, initialized (using {} block initializers), indexed, and
operated on (=, *, and + tested), though much work needs to be done on
binary expressions. So far, only ivec2 has been tested.
Ruamoko passes va_list (@args) through the ... parameter (as such), but
IMP uses ... to defeat parameter type and count checking and doesn't
want va_list. While possibly not the best solution, adding a no_va_list
flag to function types and skipping ex_args entirely does take care of
the problem without hard-coding anything specific to IMP.
The system currently just sets some bits in the type specifier (the
attribute list should probably be carried around with the specifier),
but it gets the job done for now, and at least gets things started.
This makes it much easier to check (and more robust to name changes),
allowing for effectively killing the node to which the variable being
addressed is attached. This fixes the incorrect address being used for
va_list, which is what caused double-alias to fail.
Since Ruamoko progs must use lea to get the address of a local variable,
add use/def/kill references to the move instruction in order to inform
flow analysis of the variable since it is otherwise lost via the
resulting pointer (not an issue when direct var reference move can be
used).
The test and digging for the def can probably do with being more
aggressive, but this did nicely as a proof of concept.
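The kind of code that runs into this (plain C-style illustration; the
names are made up):

typedef struct { float m[4][4]; } mat4_t;

mat4_t
copy_out (mat4_t src)
{
    mat4_t local;
    mat4_t *dst = &local;   /* lea: only a pointer to local from here on */
    /* The struct copy below becomes a move through that pointer, so
     * without use/def/kill hints on the move, flow analysis loses
     * track of local (a direct var-reference move wouldn't have this
     * problem). */
    *dst = src;
    return local;
}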
This is necessary to get statement disassembly working, and likely
debugging in general. locals is the total size of the stack frame and
thus reaches above the function-entry stack pointer, and params_start is
the local space relative start of the parameters. Thus, knowing the
function-entry stack pointer, the bottom of the locals space can be
found by subtracting params_start, and the top of the locals space by
adding (locals - params_start).
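As a sketch (my own helper functions, not engine code):

/* sp is the function-entry stack pointer; locals and params_start are
 * as described above (sizes and offsets in words). */
static unsigned
locals_bottom (unsigned sp, unsigned params_start)
{
    return sp - params_start;           /* bottom of the locals space */
}

static unsigned
locals_top (unsigned sp, unsigned locals, unsigned params_start)
{
    return sp + (locals - params_start); /* top of the locals space */
}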
This gets all the sections of the progs file nicely aligned and the code
easier to read with the offset and size calculations not being spread
through the function. ivar-struct-return now works when compiled for
Ruamoko.
The goal was to get lea used for locals in Ruamoko progs because
lea takes the base registers into account while the constant pointer
defs used by v6p cannot. Pointer defs are still used for globals as they
may be out of reach of 16-bit addressing.
address_expr() has been simplified in that it no longer takes an offset:
the vast majority of the callers never passed one, and the few that did
have been reworked to use other mechanisms. In particular,
offset_pointer_expr does the manipulations needed to add an offset
(unscaled by type size) to a pointer. High-level pointer offsets still
apply a scale, though.
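The scaled/unscaled distinction, in plain C terms (illustration only):

void
offset_kinds (int *p)
{
    int *scaled   = p + 1;                      /* high-level: scaled by the type size     */
    int *unscaled = (int *) ((char *) p + 1);   /* raw offset, not scaled by the type size */
    (void) scaled;
    (void) unscaled;
}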
Alias expressions now do a better job of handling aliasing of aliases by
simply replacing the target type when possible.