Not by much really, but using one "aux" count instead of different
types, and using functions to iterate through the statement's aux
operands makes the code easier to read.
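A minimal sketch of the shape this takes (the names are hypothetical, not qfcc's actual internals):

```c
typedef struct operand_s operand_t;

typedef struct statement_s {
    operand_t  *opa, *opb, *opc;
    int         num_aux;    // one count for all aux operands
    operand_t **aux;        // instead of separate per-type fields
} statement_t;

typedef void (*aux_func_t) (operand_t *op, void *data);

// Visit every aux operand without callers knowing the layout.
static void
statement_foreach_aux (statement_t *st, aux_func_t func, void *data)
{
    for (int i = 0; i < st->num_aux; i++) {
        func (st->aux[i], data);
    }
}
```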
Two birds with one stone: this eliminates most of the problems with going
const-correct with expr_t, and it makes dealing with internally generated
expressions at random locations much easier, as the set source location
affects all new expressions created within that scope, to any depth.
Debug output is much easier to read now.
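Roughly the mechanism, as a sketch (the actual qfcc types and names differ; this is just the idea):

```c
#include <stdlib.h>

typedef struct { const char *file; int line; } loc_t;    // hypothetical
typedef struct expr_s { loc_t loc; /* ... */ } expr_t;

static loc_t current_loc;

// Set the location used for all expressions created until the caller
// restores the returned value at scope exit.
static loc_t
set_src_loc (loc_t new_loc)
{
    loc_t old = current_loc;
    current_loc = new_loc;
    return old;
}

static expr_t *
new_expr (void)
{
    expr_t *e = calloc (1, sizeof (expr_t));
    e->loc = current_loc;   // picked up at any depth within the scope
    return e;
}
```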
ptrmove was not treating its indirect source operand as used because the
pointer operand wasn't being checked as a source.
This fixes part of ptrstructinit, but there's a lot more breakage in
that test: it looks like all sorts of fun with arrays.
There were a few places where some const-casts were needed, but they're
localized to code that's supposed to manipulate types (but I do want to
come up with something to clean that up).
When I implemented the st_alias handling (d8a78fc849) I was
unsure if I needed to unalias aliased defs too, but it turns out to have
been necessary: this is what caused my 2d PGA dynamics test to blow up
strangely, with the GA vector loaded from an array into a local
variable getting the local var replaced by a temp while the var itself
was read later in the code (an uninitialized variable due to incorrect
optimization... oops).
When sum_expr gets a null expression as one of its args, it simply
returns the other arg, but that arg needs to have the correct type
applied.
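Something like this (a sketch; cast_expr and binary_expr stand in for whatever qfcc actually uses):

```c
typedef struct type_s type_t;
typedef struct expr_s expr_t;

expr_t *cast_expr (const type_t *type, expr_t *e);    // assumed helper
expr_t *binary_expr (int op, expr_t *a, expr_t *b);   // assumed helper

expr_t *
sum_expr (const type_t *type, expr_t *a, expr_t *b)
{
    if (!a && !b) return 0;
    if (!a) return cast_expr (type, b);   // the fix: apply the type
    if (!b) return cast_expr (type, a);   // instead of returning bare
    return binary_expr ('+', a, b);
}
```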
Handle zero (null result) cross product in bivector geometric product.
Clean up types in bivector * vector geometric product.
I'm not sure the regressive product is right (overall sign), but that's
actually partly a problem in the math itself (duals and the regressive
product still get poked at, so it may be just a matter of
interpretation).
I'm not sure anything other than == or != has much meaning on anything
but scalars and pseudoscalars, but all comparisons are supported as a
simple boolean test. Any missing components are assumed to be 0. If
nothing else, it makes unit tests easier to write.
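The boolean test amounts to a componentwise comparison where the shorter operand is padded with zeros, roughly (a sketch, not the generated code):

```c
#include <stdbool.h>

static bool
mv_equal (const float *a, int na, const float *b, int nb)
{
    int n = na > nb ? na : nb;
    for (int i = 0; i < n; i++) {
        float ca = i < na ? a[i] : 0;   // missing components read as 0
        float cb = i < nb ? b[i] : 0;
        if (ca != cb)
            return false;
    }
    return true;
}
```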
It turns out the algebra types make expression dag creation much more
difficult, resulting in missed optimizations (e.g., recognizing `a × a`).
This fixes the dead cross products in `⋆(s.B × ⋆s.B)`.
The switch to using expression dags instead of trees meant that the
statement generator could traverse sub-expressions multiple times. This
is inefficient but usually ok if there are no side effects. However,
side effects and branches (usually from ?:, due to labels) break: side
effects happen more than once, and labels get emitted multiple times
resulting in orphaned statement blocks (and, in the end, uninitialized
temporaries).
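The obvious shape of a fix is to emit each dag node once and reuse its result on later visits, sketched here with hypothetical names:

```c
#include <stdbool.h>

typedef struct operand_s operand_t;
operand_t *generate_statements (struct dagnode_s *node);   // assumed

typedef struct dagnode_s {
    bool       emitted;
    operand_t *value;   // result operand from the first emission
    /* ... */
} dagnode_t;

static operand_t *
emit_node (dagnode_t *node)
{
    if (!node->emitted) {
        node->value = generate_statements (node);   // side effects once
        node->emitted = true;                       // labels emitted once
    }
    return node->value;
}
```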
Just running through the list of expressions in a block expression
results in label expressions within the block getting printed by the
expressions that reference them; they thus don't receive the correct
next pointer and wind up pointing to themselves. Printing the labels
first ensures they have the correct next pointer. However, I suspect
there are other ways things will get tangled.
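The fix is just a two-pass walk over the block's expression list (a sketch with hypothetical names):

```c
typedef struct expr_s {
    int            type;    // ex_label, etc. (hypothetical)
    struct expr_s *next;
} expr_t;

enum { ex_label = 1 };
void print_expr (expr_t *e);    // assumed debug printer

static void
print_block (expr_t *list)
{
    // First pass: labels, so each gets its correct next pointer.
    for (expr_t *e = list; e; e = e->next)
        if (e->type == ex_label)
            print_expr (e);
    // Second pass: everything else.
    for (expr_t *e = list; e; e = e->next)
        if (e->type != ex_label)
            print_expr (e);
}
```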
I'm surprised it took almost two years to discover that I had no
quaternion multiplications in any test code, but getting an ICE for a
quaternion-vector product, and a Hadamard product where the
quaternion-quaternion product should be, was a bit of a nasty surprise.
This makes a slight improvement to the commutator product in that it
removes the expand statement, but there's still the problem of (a+a)/2.
However, at least now the product is correct and slightly less abysmal.
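For reference, the commutator product is (ab − ba)/2, with ab the full (non-commutative) quaternion product. A sketch of the Hamilton product, assuming w holds the scalar part:

```c
typedef struct { float x, y, z, w; } quat_t;   // w is the scalar part

static quat_t
qmul (quat_t a, quat_t b)
{
    return (quat_t) {
        .w = a.w*b.w - a.x*b.x - a.y*b.y - a.z*b.z,
        .x = a.w*b.x + a.x*b.w + a.y*b.z - a.z*b.y,
        .y = a.w*b.y - a.x*b.z + a.y*b.w + a.z*b.x,
        .z = a.w*b.z + a.x*b.y - a.y*b.x + a.z*b.w,
    };
}
```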
This takes advantage of evaluate_constexpr to do all the work. Necessary
for use of basis blade constants in algebra contexts (avoids an internal
error).
This fixes the upostop-- test by auto-casting implicit constants to
unsigned (and it gives a warning for signed-unsigned comparisons
otherwise). The generated code isn't quite the best, but the fix for
that is next.
Also clean up the resulting mess, though not properly. There are a few
bogus warnings, and the legit ones could do with a review.
It's actually good enough for building qfcc: it fails a couple of
obscure C23 preprocessor examples that can be ignored for now (the rest
of QF builds without any apparent issue).
The expansion is necessary for the final test in preproc-2.r, but breaks
preproc-1.r because the closing ')' is *not* visible to collect_args
(its assumption is incorrect). This needs reworking (and probably
rethinking) of the entire macro argument collection, but I need a little
break from the preprocessor (and it's good enough for *most* uses), so
I'm adding the code (disabled) in order to avoid losing it and my notes
about the problem.
That is, if anything other than '(' (even a macro/argument that expands
to '(') is seen while checking for a function-type macro, the expansion
fails. This gets preproc-1.r working properly.
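Concretely, with standard C preprocessor behavior as the reference:

```c
#define F(x) ((x) + 1)
#define LPAREN() (

int F;              // ';' follows F: no expansion, F is a variable
int a = F + 1;      // '+' follows: still no expansion
int b = F (2);      // '(' follows: expands to ((2) + 1)
/* Even `F LPAREN() 2)` must not expand F: the token after F is the
 * identifier LPAREN, not '(', so the function-type check fails even
 * though LPAREN() itself expands to '('. */
```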
It really affected only collect_args, but in fixing that I noticed I was
very inconsistent with scanner's type (should be yyscan_t or void *, but
not yyscan_t *).
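For reference, flex's reentrant API already makes yyscan_t an opaque pointer (typedef void *yyscan_t); only yylex_init takes its address:

```c
typedef void *yyscan_t;                 // what flex declares

int yylex_init (yyscan_t *scanner);     // init takes the address...
int yylex (yyscan_t scanner);           // ...everything else takes the
int yylex_destroy (yyscan_t scanner);   // value: yyscan_t, not yyscan_t *

void
scan_all (void)
{
    yyscan_t scanner;
    yylex_init (&scanner);
    while (yylex (scanner)) { }
    yylex_destroy (scanner);
}
```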
There's no guarantee the source file is in a writable directory (in
fact, it is very definitely in a read-only directory when running
`make distcheck`). However, it is reasonable to assume the output file
is being written to a writable directory, so the object file directory
now defaults to that of the output file, while still using the source
file's name for the object file name.
Fixes #51
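Roughly (a sketch using POSIX dirname/basename; the real code presumably uses qfcc's own path utilities, and extension handling is omitted):

```c
#include <libgen.h>     // dirname, basename (POSIX; they may modify args)
#include <stdio.h>
#include <string.h>

static void
default_obj_path (char *dest, size_t size,
                  const char *src, const char *output)
{
    char out_copy[1024] = "", src_copy[1024] = "";
    strncpy (out_copy, output, sizeof (out_copy) - 1);
    strncpy (src_copy, src, sizeof (src_copy) - 1);
    // Output file's directory (writable), source file's name.
    snprintf (dest, size, "%s/%s",
              dirname (out_copy), basename (src_copy));
}
```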
This gets most of the second preprocessor test working, apparently just
some problems with macro arguments not getting expanded for ## (unless
there's more lurking, of course, which I know there is for __LINE__).
It just feels cleaner than unnecessarily copying token chains. It turns
out that the core problem was just order of operations in next_token:
moving the pending_macro code to after arg/macro detection seems to be
correct (even bare `G LPAREN() 0)` is *not* expanding `G`, as expected).
It turned out I had simply forgotten to ensure the token chains were
properly terminated (the struct copy would copy the next of the source
token and thus macro args always expanded to the last token of the
parent macro). And then I'd missed saving the token text when parsing
predefined macros. __VA_OPT__ is still a problem, but this work was for
making that a little easier.
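The bug, in miniature (a sketch):

```c
#include <stdlib.h>

typedef struct token_s {
    const char     *text;
    struct token_s *next;
} token_t;

static token_t *
copy_token (const token_t *tok)
{
    token_t *t = malloc (sizeof (token_t));
    *t = *tok;        // struct copy brings the source's next along...
    t->next = NULL;   // ...so the copy must be re-terminated explicitly
    return t;
}
```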
I got tired of the way the separate token types for macro expansion and
the rest of the preprocessor parser were handled. This makes them a
little more unified. Macro expansion seems to be slightly broken again
in that min/max/bound mess up badly, and __VA_OPT__ does things in the
wrong order, but I wanted to get this in as a checkpoint.
__VA_ARGS__ seems to be working, but __VA_OPT__ still needs a lot of
work for dealing with its expansions; basic error checking and simple
expansions seem to work, though.
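The target behavior is the standard one, e.g.:

```c
#include <stdio.h>

#define LOG(fmt, ...) printf (fmt __VA_OPT__(,) __VA_ARGS__)

int
main (void)
{
    LOG ("hello\n");    // no variable args: the comma is omitted
    LOG ("%d\n", 42);   // variable args: expands with the comma
    return 0;
}
```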
Macros now store their arguments and have a cursor pointing to the next
token to take from their expansion list. While not checked yet, this
will make avoiding recursive macro invocations much easier. More
importantly, it's a step closer to correct argument expansion (though
token pasting is currently broken).
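A sketch of the shape described above (hypothetical field names):

```c
typedef struct token_s token_t;

typedef struct macro_s {
    const char *name;
    token_t    *expansion;   // replacement-list token chain
    token_t   **args;        // collected argument token chains
    int         num_args;
    token_t    *cursor;      // next token to take from the expansion
} macro_t;
```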
This makes it much easier to keep track of end of file in a conditional
block (#if...#endif) as #include in non-suppressed code would result in
spurious eof errors otherwise. I'm a little concerned about correctness,
but everything seems to work, and it should be right, as suppressed
include directives do not change the state at all and suppression is
its own flag, not part of the condition stack.
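A sketch of the state described above (hypothetical names):

```c
#include <stdbool.h>

typedef struct cond_s {
    bool enabled;     // is this #if/#elif/#else branch active?
    bool seen_true;   // has any branch of this #if been taken yet?
} cond_t;

static cond_t cond_stack[64];
static int    cond_depth;
static bool   suppressed;   // its own flag, NOT part of the stack:
                            // suppressed #includes never push or pop
```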
The op code needs to be set just before being passed to the qc parser so
it doesn't get lost in macro expansion.
And vector values need to not be processed when recording, otherwise
they get lost.
I'm not sure what exactly caused vector literals to break, but bailing
out of the vector ops section on conversion to vector or quaternion
fixes game-source.
Allow 32-bit positive values without a warning and warn on conversion
issues for float. The whole conversion system needs cleaning up for
v6/v6p/ruamoko.
-D options weren't being counted correctly, so build_cpp_args was
writing past the end of the array allocated for command line arguments.
parse_cpp_name had an off-by-one error, resulting in reading past the
end of the string.
The qfcc system include path was being set in the wrong place (not sure
why I thought that was right), and not respecting no_default_paths.
-M was generating preprocessor output when it should not have been,
resulting in corrupted dependency files.
I don't remember why I couldn't get this to work last time I tried, but
it went well this time, and made a significant difference to compiling
vulkan.r (from 1.1s to 0.15s unoptimized or 0.8s optimized).
This fixes the problem with // comments after the file in #include and
the core problem the complicated // handling tried to fix with
suppressed directives. Funny how it's always the simpler code that works
better :/
I'm undecided about @ in macro names, but treating @id as one token in
the body is necessary with the single-pass tokenizing. Fixes an infinite
macro expansion loop in vecaddr.r (`#define dot @dot`), but that's
really only a bandaid for *that* issue as there are plenty of other
cases where macros will loop.
It turns out that the recursive lexing was over-complicated as the
tokens for nested macros need to come from the expanded stream, not the
raw input stream.
cpp does this for us, so the double generation is redundant (and
currently wrong anyway, with things like <built-in> and <command-line>
getting into the list of dependencies).
Or at least mostly so. The __QFCC__ define isn't visible, and it seems
undef might not be working properly (ruamoko/lib/types.r doesn't
compile). Of course, there's still the issue of whether it's compiling
correctly.
If an ID gets to the preprocessor parser in expressions, the ID is not
defined: if it were defined, it would have been expanded. Thus, all IDs
evaluate to 0.
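This is standard behavior, too:

```c
#if NAME    /* NAME is not defined here, so it evaluates as 0... */
#error "never reached"
#endif      /* ...because a defined NAME would already have been
               expanded before the expression parser saw it. */
```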
In addition to cleaning up the old flex line rules, this improves
handling of the '# num "file" flags' from cpp to at least parse the
additional flags (support for the system header flag might come later,
but I doubt the extern-c flag will have much meaning).
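For reference, the GNU cpp linemarker flags are 1 (entering a file), 2 (returning to a file), 3 (system header), and 4 (extern "C" block):

```c
# 1 "/usr/include/stdio.h" 1 3
/* flag 1: entering the header; flag 3: treat it as a system header */
# 8 "main.c" 2
/* flag 2: returning to main.c after the #include */
```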
QuakePascal has lost its line directive handling (no errors, but dead
rules) for now. Eventually the lexers will be merged.