I never liked the various hacks I had come up with for representing
resource handles in Ruamoko. Structs with an int were awkward to test,
pointers and ints could be modified, and so on. The new @handle keyword
(the @ keeps handle itself free for use as an identifier) has the same
syntax as struct, union and enum, but creates an opaque type suitable
for a 32-bit handle. The backing type is a function, so even v6 progs
can use it (all the necessary opcodes exist) and no modifications were
needed for type-checking in binary expressions; only assignment,
comparisons and (of course) nil are supported. Tested using cbuf_t and
QFile: seems to work as desired.
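
As a rough sketch of how it reads in Ruamoko source (the names here are
hypothetical; only @handle itself is the new syntax):

    // like "typedef struct foo_s foo_t;", but the tag names an opaque
    // 32-bit handle type instead of a struct
    typedef @handle file_h file_t;

    // hypothetical functions taking and returning the handle type
    file_t open_file (string path);
    void close_file (file_t file);

    void
    demo (void)
    {
        file_t      f = open_file ("data.txt");
        if (f != nil)       // assignment and comparison are allowed
            close_file (f);
        // arithmetic, dereferencing, etc. are rejected by the type checker
    }
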
I had considered 64-bit handles, but really, if more than 4G resource
objects are needed, I'm not sure QF can handle the game. However, that
limit is per resource manager, not total.
While it does get a bit cluttered currently, being able to see the
contents of structures makes a huge difference. It also highlights that
vector immediates do not get the correct type encodings.
pr_type_t now contains only the one "value" field, and all the access
macros now use their PACKED variant for base access, making access to
larger types more consistent with the smaller types.
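
A rough sketch of the shape of the change (these are not the exact QF
macro definitions, just the idea):

    #include <stdint.h>

    typedef int32_t  pr_int_t;      // roughly as in QF's progs headers
    typedef uint32_t pr_uint_t;

    // pr_type_t is down to a single 32-bit value cell
    typedef union pr_type_u {
        pr_uint_t   value;
    } pr_type_t;

    // base access goes through the packed form, which reinterprets a span
    // of consecutive cells, so 1-word ints and multi-word doubles/vectors
    // are all read the same way
    #define G_PACKED(pr, T, ofs)  (*(T *) &(pr)->pr_globals[(ofs)].value)
    #define G_INT(pr, ofs)        G_PACKED (pr, pr_int_t, ofs)
    #define G_DOUBLE(pr, ofs)     G_PACKED (pr, double, ofs)
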
It seems clang defaults to unsigned for enums, at least when all the
enumerators are non-negative. Interestingly, gcc was ok with the checks
being written either way. I guess gcc treats enums that *can* be
unsigned as DWIM.
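
The flavor of check involved (illustrative, not the exact code in
question):

    #include <stdio.h>

    // all enumerators are non-negative, so clang is free to pick an
    // unsigned underlying type for the enum
    typedef enum { VT_VOID, VT_STRING, VT_FLOAT } val_type_t;

    int
    valid_type (val_type_t t)
    {
        // with an unsigned underlying type the first comparison is always
        // true, which clang can flag; gcc took the check written either way
        return t >= 0 && t <= VT_FLOAT;
    }

    int
    main (void)
    {
        printf ("%d\n", valid_type (VT_FLOAT));
        return 0;
    }
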
Because the size of the type encoding is explicit in the encoding
itself, anything that reads the encodings without expecting the width
field will simply skip over it, since the width is placed after the ev
type in the encoding.
Any code that needs to read both the old encodings and the new can check
the size of the basic encodings to see if the width field is present.
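
Schematically (field names are illustrative, not qfcc's actual encoding
structs):

    #include <stdint.h>

    typedef struct {
        int32_t     meta;       // basic, struct, union, ...
        uint32_t    size;       // size of this whole encoding, in words
        uint32_t    encoding;   // string offset of the encoding string
        int32_t     ev_type;    // float, int, vector, ...
        int32_t     width;      // new field, placed after ev_type
    } basic_encoding_t;

    // Old readers advance by 'size', so they skip the extra width word
    // without ever looking at it. New readers can detect old data by
    // checking whether 'size' is large enough to include the width field.
    static inline int
    has_width (const basic_encoding_t *enc)
    {
        return enc->size * 4 >= sizeof (basic_encoding_t);
    }
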
This cleans up dprograms_t, making it easier to read and see what chunks
are in it (I was surprised to see only 6; the explicit pairs made it
seem like more).
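
The gist of the cleanup (field names are approximate, not the full
header):

    #include <stdint.h>

    // before: every chunk was a separate offset/count pair of fields
    typedef struct {
        uint32_t    ofs_statements, numstatements;
        uint32_t    ofs_globaldefs, numglobaldefs;
        // ... and so on for every chunk
    } dprograms_old_t;

    // after: a chunk is one small struct, so dprograms_t reads as a short
    // list of chunks
    typedef struct {
        uint32_t    offset;
        uint32_t    count;
    } chunk_t;

    typedef struct {
        chunk_t     statements;
        chunk_t     globaldefs;
        // ... the remaining chunks (6 in total)
    } dprograms_new_t;
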
Along with other related fields, integer is now int (and uinteger is
uint). I really don't know why I went with integer in the first place,
but this will make using macros for dealing with types easier.
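
For instance, a hypothetical macro that pastes the type name together
stays short and matches the usual int/uint conventions:

    #include <stdint.h>

    typedef int32_t  pr_int_t;
    typedef uint32_t pr_uint_t;
    typedef float    pr_float_t;

    // hypothetical: build the progs typedef name from a short tag
    #define PR_TYPE(t)  pr_##t##_t

    PR_TYPE (int)   a;      // pr_int_t
    PR_TYPE (uint)  b;      // pr_uint_t
    PR_TYPE (float) c;      // pr_float_t
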
They are both gone, and pr_pointer_t is now pr_ptr_t (pointer may be a
little clearer than ptr, but ptr is consistent with things like intptr,
and keeps the type name short).
It now takes a context pointer (opaque data) that holds the buffers it
uses for the temporary strings. If the context pointer is null, a static
context is used (making those uses of va NOT thread-safe). Most calls to
va use the static context, but all such calls have been formatted
consistently so they are easy to find when it comes time to do a full
audit.
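
Call sites now look roughly like this (assuming the context type is
spelled va_ctx_t; the context creation API isn't shown):

    #include "QF/va.h"

    const char *
    make_name (va_ctx_t *ctx, const char *base, const char *ext)
    {
        if (!ctx) {
            // static context: convenient, but NOT thread-safe
            return va (0, "%s.%s", base, ext);
        }
        // per-thread context: safe as long as each thread owns its own ctx
        return va (ctx, "%s.%s", base, ext);
    }
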
There's still some cleanup to do, but everything seems to be working
nicely: `make -j` works, `make distcheck` passes. There is probably
plenty of bitrot in the package directories (RPM, debian), though.
The vc project files have been removed since those versions are way out
of date and quakeforge is pretty much dependent on gcc now anyway.
Most of the old Makefile.am files are now Makemodule.am. This should
allow for new Makefile.am files that allow local building (to be added
on an as-needed basis). The current remaining Makefile.am files are for
standalone sub-projects.
The installable bins are currently built in the top-level build
directory. This may change if the clutter gets to be too much.
While this does make a noticeable difference in build times, the main
reason for the switch was to take care of the growing dependency issues:
now it's possible to build tools for code generation (e.g., using qfcc
and Ruamoko programs for code-gen).
When a type is aliased, the alias has two type chains: the simple type
chain with all other aliases stripped, and the full type chain. There
are still plenty of bugs in it, but having the clean type chain takes
care of the major issue in the previous attempt, as only the head of the
type chain needs to be skipped for type comparison.
Most of the bugs are in finding the locations where the head needs to be
skipped.
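
Conceptually it looks something like this (illustrative, not qfcc's
actual type_t layout):

    typedef enum { ty_basic, ty_struct, ty_alias /* ... */ } ty_meta_e;

    typedef struct type_s {
        ty_meta_e       meta;
        const char     *name;
        // for ty_alias: two chains down to the aliased type
        struct type_s  *aliased;    // clean chain, all other aliases stripped
        struct type_s  *full;       // full chain, every alias kept
    } type_t;

    // for comparisons, only the head alias needs to be skipped, since the
    // clean chain below it is already alias-free
    static const type_t *
    skip_alias (const type_t *t)
    {
        if (t->meta == ty_alias)
            t = t->aliased;
        return t;
    }
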
It proved to be too fragile in its current implementation. It broke
pointers to incomplete structs and switch enum checking, and getting it
to work for other things was overly invasive. I still want the encoding,
but need to come up with something more robust.
For technical reasons (programmer laziness), qfcc does not fix up local
def type encodings when writing the debug symbols file (type encoding
location not readily accessible).
The debug subsystem now uses the resources system to ensure it cleans
up, and its data is now semi-private. Unfortunately, PR_LoadDebug had to
remain public for qfprogs because using PR_RunLoadFuncs would cause
builtin resolution to complain.
This far better reflects the actual meaning. ty_none is very likely a
holdover from long before there was full type encoding, when it meant
that the union in qfcc's type_t held no data. That is still true for
basic types, but it was never true for function, field or pointer types,
so the name was wrong pretty much from the start.
The encoding is 3:5, giving 3 bits for alignment (log2) and 5 bits for
size, with the alignment in the 3 most significant bits. This keeps the
format backwards compatible: until doubles were added, all types were
aligned to 1 word, which encodes as 0, and the size bits are unaffected.
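
A sketch of the packing (the macro names are made up; the layout is as
described above):

    // 3 bits of log2(alignment) in the most significant bits of the
    // field, 5 bits of size (in words) below them
    #define TYPESIZE_ENCODE(log2align, size) \
        ((((log2align) & 0x7) << 5) | ((size) & 0x1f))
    #define TYPESIZE_ALIGN(ts)  (1u << ((ts) >> 5))
    #define TYPESIZE_SIZE(ts)   ((ts) & 0x1f)

    // float:  1-word alignment (log2 = 0), size 1 -> 0x01, same as before
    // double: 2-word alignment (log2 = 1), size 2 -> 0x22
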
It is necessary to know if a def is a function parameter so it can be
treated as initialized by the flow analyzer. The support for the flag in
object files is, at this stage, purely for debugging purposes.