Same issue as for the menus. But now I know why PR_LoadProgsFile is used
instead of PR_LoadProgs (at least for qwaq): it avoids the gamedir
restriction (however, the menus are supposed to be restricted).
There's still some cleanup to do, but everything seems to be working
nicely: `make -j` works, `make distcheck` passes. There is probably
plenty of bitrot in the package directories (RPM, debian), though.
The vc project files have been removed since those versions are way out
of date and quakeforge is pretty much dependent on gcc now anyway.
Most of the old Makefile.am files are now Makemodule.am. This should
allow for new Makefile.am files that allow local building (to be added
on an as-needed basis). The current remaining Makefile.am files are for
standalone sub-projects.
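For illustration, the non-recursive pattern looks roughly like this (a
minimal sketch with illustrative paths, not the actual QuakeForge files):
the top-level Makefile.am initializes the target lists and includes each
Makemodule.am, which appends to them using top-level-relative paths.

```make
# Makefile.am (top level) -- hypothetical sketch
AUTOMAKE_OPTIONS = subdir-objects
noinst_LTLIBRARIES =
include libs/util/Makemodule.am

# libs/util/Makemodule.am -- appends its targets to the shared lists,
# with all paths relative to the top-level directory
noinst_LTLIBRARIES += libs/util/libQFutil.la
libs_util_libQFutil_la_SOURCES = \
	libs/util/hash.c \
	libs/util/sys.c
```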
The installable bins are currently built in the top-level build
directory. This may change if the clutter gets to be too much.
While this does make a noticeable difference in build times, the main
reason for the switch was to take care of the growing dependency issues:
it is now possible to build tools (eg, qfcc and ruamoko programs) that
are then used for code generation later in the same build.
This fixes the segfault when loading the menu progs. I had forgotten
that the menu code doesn't use PR_LoadProgs (I don't remember why.
Obsolete reason?).
When I ported SEB to python, I discovered that I apparently didn't
really understand the paper's description of the end condition and the
usage of the affine and convex sets for center testing. This cleans up
the test and makes SEB more correct for the cases that have fewer than 4
supporting points (especially when there are fewer than 4 points total).
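To make the condition concrete, here is a minimal sketch (in C, with
illustrative names; not QuakeForge's actual code) of the
convex-coefficient test: the center is expressed as an affine
combination of the support points by solving the small Gram system, and
the search may stop only if every coefficient is non-negative (ie, the
combination is also convex). It assumes the center already lies in the
affine hull of the support set.

```c
#include <math.h>
#include <stdbool.h>

typedef struct { double x, y, z; } vec3;

static double
dot (vec3 a, vec3 b)
{
	return a.x * b.x + a.y * b.y + a.z * b.z;
}

static vec3
sub (vec3 a, vec3 b)
{
	return (vec3) { a.x - b.x, a.y - b.y, a.z - b.z };
}

// Express `center` as an affine combination of the support points
// p[0..n-1] (1 <= n <= 4) by solving the (n-1)x(n-1) Gram system,
// then require every coefficient to be non-negative.
bool
center_in_convex_hull (const vec3 *p, int n, vec3 center)
{
	double      A[3][4];		// augmented Gram system
	double      lambda[4] = { 1, 0, 0, 0 };
	int         k = n - 1;
	const double eps = 1e-9;

	for (int i = 0; i < k; i++) {
		vec3        vi = sub (p[i + 1], p[0]);
		for (int j = 0; j < k; j++)
			A[i][j] = dot (vi, sub (p[j + 1], p[0]));
		A[i][k] = dot (vi, sub (center, p[0]));
	}
	// Gaussian elimination with partial pivoting (k <= 3)
	for (int col = 0; col < k; col++) {
		int         pivot = col;
		for (int row = col + 1; row < k; row++)
			if (fabs (A[row][col]) > fabs (A[pivot][col]))
				pivot = row;
		for (int j = col; j <= k; j++) {
			double      t = A[col][j];
			A[col][j] = A[pivot][j];
			A[pivot][j] = t;
		}
		if (fabs (A[col][col]) < eps)
			return false;		// degenerate support set: not done
		for (int row = col + 1; row < k; row++) {
			double      f = A[row][col] / A[col][col];
			for (int j = col; j <= k; j++)
				A[row][j] -= f * A[col][j];
		}
	}
	// back-substitute for lambda[1..k]; lambda[0] is the remainder
	for (int i = k - 1; i >= 0; i--) {
		double      s = A[i][k];
		for (int j = i + 1; j < k; j++)
			s -= A[i][j] * lambda[j + 1];
		lambda[i + 1] = s / A[i][i];
		lambda[0] -= lambda[i + 1];
	}
	for (int i = 0; i < n; i++)
		if (lambda[i] < -eps)
			return false;		// center is outside the convex hull
	return true;
}
```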
Returning a string was a bad idea, as it made str_str difficult to use
with str_mid. (actually, iirc, it was the only reason I moved all
strings into progs memory... hmm).
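The composition problem is easy to see in plain C (str_str and str_mid
are the Ruamoko functions in question; this is just an analogue using
strstr, not their actual signatures): a search that yields a position
composes directly with a take-from-offset operation, whereas a freshly
built result string has lost the offset.

```c
#include <stdio.h>
#include <string.h>

int
main (void)
{
	const char *haystack = "quake/id1/progs.dat";
	const char *match = strstr (haystack, "progs");

	if (match) {
		// A position result composes: "substring from the match"
		// is just haystack + offset.  A returned copy of the match
		// would not tell us where in haystack it was found.
		size_t      offset = match - haystack;
		printf ("match at %zu: %s\n", offset, haystack + offset);
	}
	return 0;
}
```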
This should keep things nicely extensible, since additional data can be
stored in the data space and found using defs. This gets the compilation
units into the sym file.
They worked well if there was only one source file in the test, but
failed if there were two or more. While only typelinker needed the
enhanced macros, I updated them all because I generally copy the nearest
block when adding a new test, thus it's best if they're all "correct".
The compilation unit stores the directory from which qfcc was run and
any source files mentioned. This is similar to dwarf's compilation unit.
Right now, this is the only data in the new debug space, but more might
come in the future so it seems best to treat the debug space separately
in the object files.
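As a rough picture of what such a record might hold (a hypothetical
layout for illustration only; the real qfcc structures are not
reproduced here), with strings stored as offsets into the object file's
string pool:

```c
// Hypothetical sketch -- not qfcc's actual layout.
typedef struct {
	unsigned    basedir;	// string offset: directory qfcc was run from
	unsigned    num_files;	// number of source files mentioned
	unsigned    files;		// offset of an array of file-name string offsets
} compunit_sketch_t;
```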
getcwd is assumed to use malloc if its buff param is null. This may need
fixing in the future, but it's in one spot. The result is "saved" in the
non-progs pool.
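The assumption in question is the common extension where getcwd
allocates when handed a null buffer: POSIX leaves that case unspecified,
but glibc (and most modern libcs) malloc a buffer sized to fit, which is
why it may need fixing for other platforms.

```c
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int
main (void)
{
	// glibc extension: with a null buffer (and size 0), getcwd
	// mallocs a buffer large enough for the full path.
	char       *cwd = getcwd (NULL, 0);
	if (cwd) {
		printf ("%s\n", cwd);
		free (cwd);		// the caller owns the allocation
	}
	return 0;
}
```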
It never really helped sort out the path issues when using build
directories. It worked well enough for single directory projects, but
things got messy very quickly, especially when mixing ruamoko libs with
external progs. A better method based on dwarf is coming.
I decided that stopping in between function calls that are on the same
line is a good thing as it gives a chance to skip over the first but
step into the second.
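A toy example of the situation (nothing QuakeForge-specific): both calls
share one source line, so a purely line-granular stepper that only stops
on line changes would never offer the chance to enter the second call.

```c
static int f (int a) { return a + 1; }
static int g (int b) { return b * 2; }

int
main (void)
{
	// Stopping between the two calls lets a user step over f ()
	// and still step into g (), even though the line doesn't change.
	return f (1) + g (2);
}
```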
This allows a debugger to do any symbol lookups and other preparations
between loading progs and the first code execution. .ctors are called as
per normal if debug_handler is not set.
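The hand-off looks something like the following sketch (hypothetical
names and signatures; the real QuakeForge types differ): when a handler
is installed, it gets control between progs load and first execution,
presumably arranging for the .ctors itself; otherwise they run
immediately as before.

```c
// Hypothetical sketch -- not the actual QuakeForge types or names.
typedef void (*debug_handler_t) (void *data);

typedef struct {
	debug_handler_t debug_handler;	// null unless a debugger is attached
	void       *debug_data;
} progs_sketch_t;

static void
run_ctors (progs_sketch_t *pr)
{
	(void) pr;	// ... invoke each function listed in .ctors ...
}

void
progs_loaded (progs_sketch_t *pr)
{
	if (pr->debug_handler)
		// The debugger can do symbol lookups etc first, then decide
		// when (and under what control) the .ctors actually run.
		pr->debug_handler (pr->debug_data);
	else
		run_ctors (pr);		// no debugger: .ctors run as per normal
}
```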
While this does answer the question of how I'll go about restarting the
target progs (when I get to that point), it was required just to start
full-on ruamoko progs because .ctor was getting run in the main thread
and blocking due to trace.
In testing variable field width/precision in PR_Sprintf, I got a nasty
reminder of the limitations of the current progs ABI: passing @args to
another QC function does not work because the args list gets trampled by
the called function's locals. Thus, the need for a va_copy. It's not
quite the same as C's as it returns the destination args instead of
copying like memcpy, but it does copy the list from the source args to a
temporary buffer that is freed when the calling function returns.
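For comparison, the standard C shape of the same problem and fix (the
Ruamoko va_copy described above differs in that it returns the copied
list rather than filling in a destination): a va_list handed to another
function, or traversed twice, must be copied first.

```c
#include <stdarg.h>
#include <stdio.h>

static void
print_twice (const char *fmt, ...)
{
	va_list     args, copy;

	va_start (args, fmt);
	va_copy (copy, args);	// without this, the second traversal
	vprintf (fmt, args);	// would read an indeterminate list
	vprintf (fmt, copy);
	va_end (copy);
	va_end (args);
}

int
main (void)
{
	// Variable field width and precision ("%*.*f"), the PR_Sprintf
	// feature being tested: width and precision are arguments too.
	print_twice ("%*.*f\n", 10, 3, 3.14159);
	return 0;
}
```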