This arranges the types so that either there is only one definition, or C
sees the same name for what is essentially the same type, despite there
being multiple local definitions.
The syntax is not at all correct at this stage (really, just a copy of
Ruamoko), but the keyword table exists (in the wrong place) and the
additional basic types (bool, bvecN and (d)matNxM) have been added.
The boolean base type is currently just int, and matrices have a width of
0 while I think about what to use, but it's finally some progress after
several months' hiatus.
This allows the dags code to optimize the return values and, once I make
the node killing by function calls less aggressive, it should make for
many more potential CSE optimizations.
The fix in bdafdad0d5 for
`while (count--)` never did appeal to me. I think I understood the core
problem at the time, but I hadn't figured out how to use a var's
use/define sets to detect the write-before-read. Using them allows the
special handling for flow control to be removed, making things more
robust. The function call handling has been superfluous since the
Ruamoko instruction set required the auxiliary operands on the call
statements.
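A minimal sketch of the idea, assuming each statement carries use/define
sets for the variables it reads and writes (the bitmask representation and
names here are hypothetical, not qfcc's actual structures):

    /* Hypothetical sketch of write-before-read detection using
     * per-statement use/define sets. */
    #include <stdint.h>
    #include <stdbool.h>

    typedef struct {
        uint64_t use;   /* bit i set: statement reads variable i  */
        uint64_t def;   /* bit i set: statement writes variable i */
    } statement_t;

    /* The previous value of var is dead if some statement defines it
     * before any statement uses it.  Checking use before def within a
     * statement handles things like count--, which do both. */
    static bool
    write_before_read (const statement_t *st, int count, int var)
    {
        uint64_t bit = UINT64_C (1) << var;
        for (int i = 0; i < count; i++) {
            if (st[i].use & bit)
                return false;   /* read first: previous value is live */
            if (st[i].def & bit)
                return true;    /* written first: previous value is dead */
        }
        return false;
    }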
Two birds with one stone: it eliminates most of the problems with going
const-correct with expr_t, and it makes dealing with internally generated
expressions at random locations much easier, as the set source location
affects all new expressions created within that scope, to any depth.
Debug output is much easier to read now.
There were a few places where some const-casts were needed, but they're
localized to code that's supposed to manipulate types (but I do want to
come up with something to clean that up).
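A rough sketch of the mechanism, with entirely hypothetical names (this is
the idea, not qfcc's actual API): expression constructors read a current
source location rather than taking one as a parameter, and a scope sets
and later restores it.

    #include <stdlib.h>

    typedef struct { const char *file; int line; } src_loc_t;

    static src_loc_t current_loc;

    typedef struct expr_s {
        src_loc_t loc;
        /* ... */
    } expr_t;

    expr_t *new_expr (void)
    {
        expr_t *e = calloc (1, sizeof (expr_t));
        e->loc = current_loc;   /* every new expression tags itself */
        return e;
    }

    /* Generate expressions "at" some location, then restore the old
     * one; everything created inside gen (), to any depth, gets loc. */
    void with_location (src_loc_t loc, void (*gen) (void))
    {
        src_loc_t saved = current_loc;
        current_loc = loc;
        gen ();
        current_loc = saved;
    }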
I'm not sure the regressive product is right (overall sign), but that's
actually partly a problem in the math itself (duals and the regressive
product still get poked at, so it may be just a matter of
interpretation).
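For reference, a common definition of the regressive product goes through
duals (LaTeX notation):

    A \vee B = \bigl((A I^{-1}) \wedge (B I^{-1})\bigr) I

where I is the pseudoscalar. Whether the dual is taken with I or I^{-1}
(right versus left complement) is exactly the kind of convention choice
that flips the overall sign, so the "matter of interpretation" above is
plausible.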
The switch to using expression dags instead of trees meant that the
statement generator could traverse sub-expressions multiple times. This
is inefficient but usually ok if there are no side effects. However,
side effects and branches (usually from ?:, due to labels) break: side
effects happen more than once, and labels get emitted multiple times
resulting in orphaned statement blocks (and, in the end, uninitialized
temporaries).
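An illustrative shape of the problem (plain C, not an actual test case):
a conditional sub-expression that becomes a single dag node referenced
from two places.

    /* If the `v < lo ? lo : v` expression ends up as one dag node with
     * two parents, walking the dag like a tree emits its branch labels
     * (and any side effects it contains) once per parent, orphaning
     * the second copy's statement blocks. */
    int clamp_twice (int v, int lo)
    {
        int a = v < lo ? lo : v;        /* shared conditional node ...   */
        return a + (v < lo ? lo : v);   /* ... referenced a second time  */
    }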
This makes a slight improvement to the commutator product in that it
removes the expand statement, but there's still the problem of (a+a)/2.
However, at least now the product is correct and slightly less abysmal.
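For context, the commutator product being generated is the usual one
(LaTeX notation):

    A \times B = \tfrac{1}{2} (A B - B A)

Terms that appear in AB and BA with opposite signs survive the
subtraction as a + a over 2, which is presumably where the unsimplified
(a+a)/2 comes from.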
There's no guarantee the source file is in a writable directory (in
fact, it is very definitely in a read-only directory when running
`make distcheck`). However, it is reasonable to assume the output file
is being written to a writable directory, so the object file directory
now defaults to that of the output file, while still using the source
file's name for the object file name.
Fixes #51
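A hypothetical sketch of that default (not the actual qfcc code, and the
.o suffix is only illustrative): take the directory from the output file
and the name from the source file.

    #include <libgen.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    char *
    default_object_file (const char *source, const char *output)
    {
        /* dirname/basename may modify their arguments, so copy */
        char *out_copy = strdup (output);
        char *src_copy = strdup (source);
        char *dir  = dirname (out_copy);    /* writable output dir    */
        char *base = basename (src_copy);   /* source file's name     */
        char *dot  = strrchr (base, '.');

        if (dot)
            *dot = 0;                       /* drop source extension  */
        size_t len = strlen (dir) + strlen (base) + sizeof ("/.o");
        char  *obj = malloc (len);
        snprintf (obj, len, "%s/%s.o", dir, base);
        free (out_copy);
        free (src_copy);
        return obj;
    }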
It just feels cleaner than unnecessarily copying token chains. It turns
out that the core problem was just order of operations in next_token:
moving the pending_macro code to after arg/macro detection seems to be
correct (even bare `G LPAREN() 0)` is *not* expanding `G`, as expected).
I got tired of the way the separate token types for macro expansion and
the rest of the preprocessor parser were handled. This makes them a
little more unified. Macro expansion seems to be slightly broken again
in that min/max/bound mess up badly, and __VA_OPT__ does things in the
wrong order, but I wanted to get this in as a checkpoint.
__VA_ARGS__ seems to be working, and while __VA_OPT__ still needs a lot
of work for dealing with its expansions, basic error checking and simple
expansions seem to work.
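For reference, the simple cases look like the standard C23/C++20
semantics for __VA_OPT__ (shown here as the target behavior, not as qfcc
test output):

    #include <stdio.h>

    /* __VA_OPT__(,) emits the comma only when variadic args are given */
    #define LOG(fmt, ...)  printf (fmt __VA_OPT__(,) __VA_ARGS__)

    int main (void)
    {
        int x = 3;
        LOG ("done\n");          /* -> printf ("done\n");      */
        LOG ("x = %d\n", x);     /* -> printf ("x = %d\n", x); */
        return 0;
    }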
Macros now store their arguments and have a cursor pointing to the next
token to take from their expansion list. While not checked yet, this
will make avoiding recursive macro invocations much easier. More
importantly, it's a step closer to correct argument expansion (though
token pasting is currently broken).
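A rough sketch of the shape of that data (hypothetical names, not qfcc's
actual definitions):

    typedef struct token_s {
        struct token_s *next;
        const char *text;
    } token_t;

    typedef struct macro_s {
        const char *name;
        token_t   **args;        /* captured argument token chains     */
        int         num_args;
        token_t    *expansion;   /* replacement list                   */
        token_t    *cursor;      /* next token to take from expansion  */
        struct macro_s *invoker; /* enclosing expansion, for recursion
                                    checks */
    } macro_t;

    /* Take the next token from a macro's expansion, or null when done. */
    static token_t *
    macro_next (macro_t *macro)
    {
        token_t *tok = macro->cursor;
        if (tok)
            macro->cursor = tok->next;
        return tok;
    }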
-D options weren't counting correctly, so build_cpp_args was writing past
the end of the array allocated for command line arguments.
parse_cpp_name had an off-by-one that resulted in reading past the end of
the string.
The qfcc system include path was being set in the wrong place (not sure
why I thought that was right), and not respecting no_default_paths.
-M was generating preprocessor output when it should not have been,
resulting in corrupted dependency files.
Or at least mostly so. The __QFCC__ define isn't visible, and it seems
undef might not be working properly (ruamoko/lib/types.r doesn't
compile). Of course, there's still the issue of whether it's compiling
correctly.
In addition to cleaning up the old flex line rules, this improves
handling of the '# num "file" flags' from cpp to at least parse the
additional flags (support for the system header flag might come later,
but I doubt the extern-c flag will have much meaning).
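For reference, cpp's linemarkers have the form '# linenum "file" flags',
where the optional flags are 1 (start of a new file), 2 (returning to a
file), 3 (system header) and 4 (extern "C" block). A hypothetical sketch
of reading the trailing flags (not the actual lexer rule):

    #include <stdlib.h>

    static void
    parse_linemarker_flags (const char *p, int *sys_header, int *extern_c)
    {
        char *end;
        long  flag;

        *sys_header = *extern_c = 0;
        while ((flag = strtol (p, &end, 10)), end != p) {
            if (flag == 3) *sys_header = 1;  /* system header       */
            if (flag == 4) *extern_c = 1;    /* extern "C" block    */
            p = end;                         /* 1 and 2 affect file
                                                tracking elsewhere  */
        }
    }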
QuakePascal has lost its line directive handling (no errors, but dead
rules) for now. Eventually the lexers will be merged.
Really, function-type macros expand too, but incorrectly, as the
parameters are not parsed and thus not expanded. Still, this gets the
basic handling implemented, including # and ## processing.
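For reference, the standard meaning of the two operators (not a claim
about how complete the handling is yet): # stringizes a macro parameter
and ## pastes adjacent tokens.

    #include <stdio.h>

    #define ID       foo ## bar   /* object-like: pastes to foobar  */
    #define STR(x)   #x           /* function-like: # needs a param */

    int main (void)
    {
        int ID = 42;                                 /* int foobar = 42 */
        printf ("%s = %d\n", STR (foobar), foobar);  /* "foobar = 42"   */
        return 0;
    }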
This will be used for unifying preprocessing and parsing, the idea being
that the tokens will be recorded for later expansion via macros, without
the need to retokenize.
It's now meant only for ALLOC. Interestingly, when DEBUG_QF_MEMORY is
defined in expr.c, something breaks badly with vkgen (no sniffles out of
valgrind, though), but everything is fine with it not defined. It seems
there may be some unpleasant UB going on somewhere.