This reverts commit 2904c619c1.
In order to support swizzle operations, I need to be able to alias defs
to larger types (eg, float to vec4), but alias_def rightly won't allow
this. However, as the plan is to do this in the final steps before
emitting the instruction, I plan on creating an alias to a float then
adjusting the type in the alias, but to do so without extra shenanigans,
I need alias_def to allow aliases to the same type. As a fringe benefit,
it makes the code agree with the comment in def.h :P
I'd created new_value_expr some time ago, but never used it...
Also, replace convert_* with cast_expr to the appropriate type (this
removes a pile of value checking and creation code).
pr_type_t now contains only the one "value" field, and all the access
macros now use their PACKED variant for base access, making access to
larger types more consistent with the smaller types.
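Roughly, the idea looks like this (the field, type, and macro names here are placeholders, not the actual progs definitions):

typedef union pr_type_u {
    int32_t value;          // the single remaining field (placeholder member type)
} pr_type_t;

// base access always goes through a packed cast of the word array, so a
// one-word float and an eight-word dvec4 are read the same way:
#define WORD_ACCESS(base, ofs, T)  (*(T *) &((pr_type_t *) (base))[ofs])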
dvec4, lvec4 and ulvec4 need to be aligned to 8 words (32 bytes) in
order to avoid hardware exceptions. Rather than dealing with possibly
mixed alignment when a function has 8-word aligned locals but only
4-word aligned parameters, simply keep the stack frame 8-word aligned at
all times.
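In other words, the frame size is simply rounded up to the next 8-word boundary; the arithmetic amounts to this (illustrative only):

// round a size in words up to an 8-word (32-byte) boundary
int align_frame_size (int words)
{
    return (words + 7) & ~7;
}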
As for sizes, the temp def recycler was written before the Ruamoko ISA
was even a pipe dream and thus never expected temp def sizes over 4. At
least now any future adjustments can be done in one place.
My quick and dirty test program works :)
dvec4 xy = {1d, 2d, 0d, 0.5};
void printf(string fmt, ...) = #0;
int main()
{
    dvec4 u = {3, 4, 3.14};
    dvec4 v = {3, 4, 0, 1};
    dvec4 w = v * xy + u;
    printf ("[%g, %g, %g, %g]\n", w[0], w[1], w[2], w[3]);
    return 0;
}
They're now properly part of the type system: they can be used to
declare variables, can be initialized (using {} block initializers),
operated on (=, *, and + tested, though much work needs to be done on
binary expressions), and indexed. So far, only ivec2 has been tested.
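For reference, the sort of source that works now (this just mirrors the dvec4 test above; assume * is component-wise here):

ivec2 a = {1, 2};
ivec2 b = {3, 4};
ivec2 c = a * b + a;    // component-wise: {4, 10}
int   x = c[1];         // indexing works too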
While all base registers can be used for any purpose at any time (this
is why the with instruction has hard-absolute modes: you can never get
permanently lost), qfcc currently uses the convention of register 0 for
globals and register 1 for stack locals (params, locals, function args).
The register used to access a def is stored in the def and that is used
to set the register bits in the instruction opcode.
The def code actually doesn't know anything about any conventions: it
assumes all non-temp defs are global (the function code updates the
defs before emitting code), and the current function provides the
register to use for any temp defs allocated while emitting code.
Seems to work well, but debug is utterly messed up (not surprised, that
will be tricky).
Since Ruamoko now uses the stack for parameters and locals, parameters
need to come after locals in the address space (instead of before, as in
v6 progs). Thus use separate spaces for parameters and locals regardless
of the target, then stitch them together appropriately for the target.
The third space is used for allocating stack space for arguments to
called functions. It is not used for v6 progs, and comes before locals
in Ruamoko progs.
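The stitching itself is just offset arithmetic; roughly (names and layout here are illustrative, not the actual qfcc code):

// Ruamoko frame:  [args][locals][params]   (params after locals)
// v6 progs frame: [params][locals]         (no separate args space)
int param_base (int args_size, int locals_size, int ruamoko)
{
    if (ruamoko)
        return args_size + locals_size;
    return 0;
}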
Other than the return value, and optimization (which ICEs because it's
not implemented), calls in Ruamoko look like they'll work.
And other related fields so integer is now int (and uinteger is uint). I
really don't know why I went with integer in the first place, but this
will make using macros easier for dealing with types.
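E.g., a token-pasting macro (hypothetical, just to show the point) now lines up with the C-style keywords:

#define EV_TYPE(t) ev_##t
EV_TYPE (int)       // -> ev_int; with "integer" the enum suffix and the keyword don't line up
EV_TYPE (uint)      // -> ev_uint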
While this was a pain to get working, that pain only went to prove the
value of using proper "types" (even if only an enum) for different
expression types: just finding all the places to edit was a chore, and
it was easy to make mistakes (forgetting bits here and there).
Strangely enough, this exposed a pile of *type* aliasing bugs (next
commit).
It now takes a context pointer (opaque data) that holds the buffers it
uses for the temporary strings. If the context pointer is null, a static
context is used (making those uses of va NOT thread-safe). Most calls to
va use the static context, but all such calls have been formatted
consistently so they are easy to find when it comes time to do a full
audit.
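Usage-wise the pattern looks like this (treat the context type as whatever the va module provides; the details here are illustrative):

// ctx obtained from the va module's constructor (not shown)
const char *a = va (ctx, "frame %d", framenum);    // uses the caller's buffers
const char *b = va (0, "frame %d", framenum);      // static context: NOT thread-safe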
There's still some cleanup to do, but everything seems to be working
nicely: `make -j` works, `make distcheck` passes. There is probably
plenty of bitrot in the package directories (RPM, debian), though.
The vc project files have been removed since those versions are way out
of date and quakeforge is pretty much dependent on gcc now anyway.
Most of the old Makefile.am files are now Makemodule.am. This should
allow for new Makefile.am files that allow local building (to be added
on an as-needed basis). The current remaining Makefile.am files are for
standalone sub-projects.
The installable bins are currently built in the top-level build
directory. This may change if the clutter gets to be too much.
While this does make a noticeable difference in build times, the main
reason for the switch was to take care of the growing dependency issues:
now it's possible to build tools for code generation (eg, using qfcc and
ruamoko programs for code-gen).
All simple type checks are now done using is_* helper functions. This
will help hide the implementation details of the type system from the
rest of the compiler (especially the changes needed for type aliasing).
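A minimal sketch of what the helpers hide (placeholder definitions; the real ones live with the type code):

typedef enum { ev_void, ev_float, ev_int } etype_t;    // placeholder enum
typedef struct type_s { etype_t type; } type_t;

static int is_float (const type_t *t) { return t->type == ev_float; }
static int is_int   (const type_t *t) { return t->type == ev_int; }

// callers ask about a property:  if (is_float (some_type)) ...
// instead of comparing some_type->type == ev_float all over the place.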
Now convert_nil only assigns the nil expression a type, and nil makes
its way down to the statement emission code (where it belongs, really).
Breaks even more things :)
This fixes the problem of using the return value of a function as an
element in a compound initializer. The cause of the problem is that
compound initializers were represented by block expressions, but
function calls are contained within block expressions, so def
initialization saw the block expression and thought it was a nested
compound initializer.
Technically, it was a bug in the nested element parsing code in that it
wasn't checking the result value of the block expression, but using a
whole new expression type makes things much cleaner and the work done
paves the way for labeled initializers and compound assignments.
The end goal was to fix erroneous non-constant initializer errors for
the following (ie, nested initializer blocks):
typedef struct { int x; int y; } Point;
typedef struct { int width; int height; } Extent;
typedef struct Rect_s { Point offset; Extent extent; } Rect;
Rect makeRect (int xpos, int ypos, int xlen, int ylen)
{
    Rect rect = {{xpos, ypos}, {xlen, ylen}};
    return rect;
}
However, it turned out that nested initializer blocks for local
variables did not work at all: the relocations were lost because fake
defs were being created for the generated instructions.
Thus, instead of creating fake defs, simply record the offset relative
to the base def, the type, and the basic type initializer expression,
then generate instructions that all refer to the correct def but with a
relative offset.
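Per element, the recorded information is roughly (names made up for illustration):

typedef struct element_s {
    struct element_s *next;     // chain of elements for one compound initializer
    int               offset;   // words from the start of the base def
    struct type_s    *type;     // type of this element
    struct expr_s    *expr;     // the basic-type initializer expression
} element_t;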
Other than using the new element system, static initializers are largely
unaffected.
It proved to be too fragile in its current implementation. It broke
pointers to incomplete structs and switch enum checking, and getting it
to work for other things was overly invasive. I still want the encoding,
but need to come up with something more robust.
Attempting to define a variable with an incomplete type is an error,
and results in a default size of 1 being allocated, but I forgot to
set a default alignment when implementing alignment.
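That is, for something like:

struct foo;         // declared but never completed
struct foo bar;     // error; a default size of 1 is used, and now a default alignment is set too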
This is for modern code. Traditional code still treats initialized
globals as constant and nosave. This will make a bit of a mess of
modern code that expects traditional behavior.