No semantics yet, but qfcc can parse some of QF's shaders. The grammar
mostly follows that in the OpenGL Shading Language, Version 4.60.7 spec,
but with a few fewer tokens.
I need upstream flex for its line handling and nicer interface, but
debian's flex is ancient and stuck with the legacy interface, and I want
QF to be buildable on at least sid.
The syntax is not at all correct at this stage (really, just a copy of
Ruamoko), but the keyword table exists (in the wrong place) and the
additional basic types (bool, bvecN and (d)matNxM) have been added.
The boolean base type is currently just int, and matrices have 0 width
while I think about what to use, but it's finally some progress after
several months' hiatus.
There were a few places where some const-casts were needed, but they're
localized to code that's supposed to manipulate types (though I do want
to come up with something cleaner for that).
I'm not sure the regressive product is right (overall sign), but that's
actually partly a problem in the math itself (duals and the regressive
product still get poked at, so it may be just a matter of
interpretation).
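For reference, the dual-based definition I'm assuming (not necessarily
the exact convention used in QF's GA code):

```latex
% regressive product via the dual (assumed convention)
A \vee B = \star^{-1}(\star A \wedge \star B)
% the double dual is only the identity up to a sign, for a k-blade in
% n dimensions, with s the sign of the metric determinant:
\star \star \omega = (-1)^{k(n-k)}\, s\, \omega
```

Since the double dual only returns the original blade up to a grade- and
signature-dependent sign, choosing the dual versus its inverse (or a
left versus a right complement) flips the overall sign for some grades,
which is exactly the kind of interpretation question above.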
The expansion is necessary for the final test in preproc-2.r, but breaks
preproc-1.r because the closing ')' is *not* visible to collect_args
(collect_args's assumption that it is turns out to be incorrect). This
needs a rework (and probably a rethink) of the entire macro argument
collection, but I need a little break from the preprocessor (and it's
good enough for *most* uses), so I'm adding the code (disabled) in order
to avoid losing it and my notes about the problem.
That is, if anything other than '(' (even a macro/argument that expands
to '(') is seen while checking for a function-type macro, the expansion
fails. This gets preproc-1.r working properly.
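To illustrate the rule with the `G LPAREN() 0)` case mentioned further
below (the definitions of G and LPAREN here are made up for the
example):

```c
#define LPAREN() (
#define G(x) ((x) + 1)

G LPAREN() 0 )  /* the token after G is the identifier LPAREN, not '(',   */
                /* so G is not treated as a function-like invocation even */
                /* though LPAREN() expands to '(' */
```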
It really affected only collect_args, but in fixing that I noticed I was
very inconsistent with the scanner's type (it should be yyscan_t or
void *, but not yyscan_t *).
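For context, the reentrant flex interface looks roughly like this (the
header name is a stand-in for the flex-generated one; yyscan_t itself is
an opaque void * handle):

```c
#include "pre-lex.h"    /* stand-in for the flex-generated header */

static void
scan_all (void)
{
    yyscan_t scanner;           /* opaque handle (a void * underneath)   */
    yylex_init (&scanner);      /* init/destroy take a yyscan_t *        */
    while (yylex (scanner)) {   /* but yylex takes the yyscan_t by value */
    }
    yylex_destroy (scanner);
}
```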
This gets most of the second preprocessor test working; apparently just
some problems remain with macro arguments not getting expanded for ##
(unless there's more lurking, of course, which I know there is for
__LINE__).
It just feels cleaner than unnecessarily copying token chains. It turns
out that the core problem was just order of operations in next_token:
moving the pending_macro code to after arg/macro detection seems to be
correct (even bare `G LPAREN() 0)` is *not* expanding `G`, as expected).
It turned out I had simply forgotten to ensure the token chains were
properly terminated (the struct copy would copy the next of the source
token and thus macro args always expanded to the last token of the
parent macro). And then I'd missed saving the token text when parsing
predefined macros. __VA_OPT__ is still a problem, but this work was for
making that a little easier.
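A hypothetical sketch of the termination bug (qfcc's real token type is
different; this is just the shape of the problem):

```c
#include <stdlib.h>

typedef struct token_s {    /* made-up token type for the example */
    struct token_s *next;
    const char     *text;
} token_t;

static token_t *
copy_token (const token_t *src)
{
    token_t *tok = malloc (sizeof (*tok));
    *tok = *src;    /* the struct copy also copies src->next...      */
    tok->next = 0;  /* ...so the copy must be terminated explicitly, */
                    /* or the chain runs on into the parent macro    */
    return tok;
}
```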
I got tired of the way the separate token types for macro expansion and
the rest of the preprocessor parser were handled. This makes them a
little more unified. Macro expansion seems to be slightly broken again
in that min/max/bound mess up badly, and __VA_OPT__ does things in the
wrong order, but I wanted to get this in as a checkpoint.
__VA_ARGS__ seems to be working, and while __VA_OPT__ still needs a lot
of work for dealing with its expansions, basic error checking and simple
expansions do work.
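For reference, the standard __VA_OPT__ behavior (which I take to be the
target) looks like this:

```c
#include <stdio.h>

#define LOG(fmt, ...) printf (fmt __VA_OPT__(,) __VA_ARGS__)

int
main (void)
{
    LOG ("hello\n");    /* no variadic args: __VA_OPT__(,) emits nothing */
    LOG ("%d\n", 42);   /* args present: the ',' is emitted before 42    */
    return 0;
}
```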
Macros now store their arguments and have a cursor pointing to the next
token to take from their expansion list. While not checked yet, this
will make avoiding recursive macro invocations much easier. More
importantly, it's a step closer to correct argument expansion (though
token pasting is currently broken).
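A hypothetical sketch of the arrangement (field names made up, reusing
the made-up token_t from the earlier sketch):

```c
typedef struct pmacro_s {
    const char *name;
    token_t    *args;    /* the collected argument token chains       */
    token_t    *tokens;  /* the macro's expansion list                */
    token_t    *cursor;  /* next token to take from the expansion;    */
                         /* while non-null the macro is "live", which */
                         /* is what makes recursion checks easier     */
} pmacro_t;
```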
This makes it much easier to keep track of end of file in a conditional
block (#if...#endif), as #include in non-suppressed code would otherwise
result in spurious eof errors. I'm a little concerned about correctness,
but everything seems to work, and it should be right: suppressed include
directives do not change the state at all, and suppression is its own
flag rather than part of the condition stack.
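The situation in question, roughly (file name made up):

```c
#if DEBUG
#include "debug.h"  /* reaching the end of debug.h must not be reported */
                    /* as a missing #endif for the #if in this file     */
#endif
```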
The op code needs to be set just before being passed to the qc parser so
it doesn't get lost in macro expansion.
And vector values must not be processed when recording, otherwise they
get lost.
Allow 32-bit positive values without a warning and warn on conversion
issues for float. The whole conversion system needs cleaning up for
v6/v6p/ruamoko.
-D options weren't being counted correctly, so build_cpp_args was
writing past the end of the array allocated for command line arguments.
parse_cpp_name had an off-by-one resulting in reading past the end of
the string.
The qfcc system include path was being set in the wrong place (not sure
why I thought that was right), and not respecting no_default_paths.
-M was generating preprocessor output when it should not have been,
resulting in corrupted dependency files.
This fixes the problem with // comments after the file in #include and
the core problem the complicated // handling tried to fix with
suppressed directives. Funny how it's always the simpler code that works
better :/
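The failing input was essentially of this shape (file name made up):

```c
#include "defs.h" // a trailing comment like this must not be taken as
                  // part of the include directive's file argument
```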
I'm undecided about @ in macro names, but treating @id as one token in
the body is necessary with the single-pass tokenizing. Fixes an infinite
macro expansion loop in vecaddr.r (`#define dot @dot`), but that's
really only a bandaid for *that* issue as there are plenty of other
cases where macros will loop.
It turns out that the recursive lexing was over-complicated as the
tokens for nested macros need to come from the expanded stream, not the
raw input stream.
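A small example of what taking tokens from the expanded stream means
here (definitions made up):

```c
#define INNER(x) ((x) * 2)
#define OUTER(x) INNER (x + 1)

OUTER (3)   /* expands to ((3 + 1) * 2): INNER's invocation and its */
            /* argument tokens come from OUTER's already-expanded   */
            /* output, not from rescanning the raw input            */
```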
Or at least mostly so. The __QFCC__ define isn't visible, and it seems
#undef might not be working properly (ruamoko/lib/types.r doesn't
compile). Of course, there's still the issue of whether it's compiling
correctly.
In addition to cleaning up the old flex line rules, this improves
handling of the '# num "file" flags' line markers from cpp to at least
parse the additional flags (support for the system header flag might
come later, but I doubt the extern-c flag will have much meaning).
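For reference, the line markers GNU cpp emits look like this:

```c
# 1 "/usr/include/stdio.h" 1 3
/* '# linenum "filename" flags', where flag 1 = entering the file,   */
/* 2 = returning to it, 3 = system header, 4 = implicit extern "C"   */
```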
QuakePascal has lost its line directive handling (no errors, but dead
rules) for now. Eventually the lexers will be merged.