And disable it for GLSL. I might need to make it target-dependent
instead as I don't know if I'll be able to implement it in SPIR-V, and
when I get to the C back-end, it will be superfluous.
The idea is to allow certain contexts to interpret something like `2D`
as an identifier instead of 2.0 (double), or (though I'm not sure I'll go
there) `5e12` as a bivector instead of a double or float.
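A rough sketch of the `2D` case (none of these names are real qfcc
functions; purely illustrative):
```
// Hypothetical illustration only: the scanner consults the current
// context before deciding whether "2D" is the double literal 2.0 or an
// identifier.
static token_t *
scan_numeric_ident (rua_ctx_t *ctx, const char *text)
{
    if (ctx_prefers_identifiers (ctx)) {
        return new_name_token (ctx, text);            // e.g. "2D" as a name
    }
    return new_double_token (ctx, strtod (text, 0));  // "2D" -> 2.0
}
```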
Because the symbol tables for generic functions are ephemeral (as such,
they need to be easily removed from the scope chain), it's easiest if
definitions are never added to them: instead, they get added to the
parent symbol table. This keeps handling of function declarations or
definitions and their parameter scopes simple, as the function still gets
put in the global scope, and the parameter scope simply gets reconnected
to the global scope (really, the generic scope's parent) when the
parameter scope is popped within a generic scope.
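A simplified sketch of the reconnection (illustrative types and function,
not qfcc's actual symtab API):
```
// When a parameter scope is popped while a generic scope is active, the
// parameter scope is relinked past the ephemeral generic scope to that
// scope's parent (normally the global scope), so dropping the generic
// scope later doesn't take the function's scope chain with it.
typedef struct symtab_s {
    struct symtab_s *parent;    // next table in the scope chain
    /* ... the table's symbols ... */
} symtab_t;

static symtab_t *
pop_param_scope (symtab_t *param_scope, symtab_t *generic_scope)
{
    if (param_scope->parent == generic_scope) {
        param_scope->parent = generic_scope->parent;  // bypass generic scope
    }
    return param_scope->parent;
}
```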
Because the glsl front-end uses Ruamoko to compile its builtins, it
needs to switch languages, and the cleanest way to do so is to use a
context object that gets passed around. This removes not only the
current_language global, but also (as a bonus) any real references to
flex's scanner object (there's still a pointer in rua_ctx_t, but it's no
longer a parameter (which caused some pain in the change)).
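Roughly, the context looks like this (rua_ctx_t is the real name; the
fields shown are an assumption for illustration):
```
// Assumed layout: the context carries what used to be global state or
// extra parameters, so the GLSL front-end can temporarily switch to the
// Ruamoko language to compile its builtins and then switch back.
typedef struct rua_ctx_s {
    void              *scanner;   // flex scanner state (no longer a parameter)
    struct language_s *language;  // the currently active front-end language
} rua_ctx_t;
```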
The main goal was to make it possible to give generic functions
definitions (since the code would be very dependent on the actual
parameter types), but it will also allow for inline functions. It also
helped move a lot of the back-end dependent code out of semantics
processing and almost completely (if not completely) out of the parser.
Possibly more importantly, it gets the flushing of dags out of the
parser, which means that code is now shared by all front-ends.
There's probably a lot of dead code in expr.c now, but that can be taken
care of another time.
Now declarations can be deferred too, thus things like generic/template
and inline functions should be possible. However, the most important
thing is that this is a step towards a cleaner middle layer for
compilation, separating front-end language from back-end code-gen.
Attributes seem appropriate as GLSL's qualifiers affect variables rather
than types (since there's no typedef).
Not much is done with the attributes yet other than some basic error
checking (duplicates of non-layout attributes) and debug output, but
most (if not all) declarations get to the declaration code with
attributes intact.
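Something like the following is what travels with a declaration; the
struct is a sketch, not necessarily qfcc's real attribute type:
```
// A qualifier such as "flat" or "layout (location = 0)" is carried as an
// attribute on the declared variable rather than being folded into the
// type (GLSL has no typedef, so qualifying the type buys nothing).
typedef struct attribute_s {
    struct attribute_s *next;    // attributes form a simple list
    const char         *name;    // "uniform", "flat", "layout", ...
    struct expr_s      *params;  // the (location = 0) part, if any
} attribute_t;
```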
The version directive really does only some error checking. For
extensions, only GL_EXT_multiview and GL_GOOGLE_include_directive are
supported, and enable/disable work, but warn does not yet work for
multiview.
Using a struct with function pointers instead of switching on an enum
makes it much easier to add languages and, more importantly,
sub-languages like glsl's shader stage variants.
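The pattern is roughly this (hook names are illustrative):
```
// Each front-end language, or sub-language such as a GLSL shader stage,
// supplies a table of hooks; selecting a language means pointing the
// context at a different table rather than adding cases to a switch.
typedef struct language_s {
    void (*init) (struct rua_ctx_s *ctx);    // per-language setup
    int  (*parse) (struct rua_ctx_s *ctx);   // run the language's parser
    /* ... other per-language hooks ... */
} language_t;

extern language_t lang_ruamoko;
extern language_t lang_glsl_vertex;      // sub-language: one table per stage
extern language_t lang_glsl_fragment;
```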
The end goal is to allow generic and/or template functions, but this
allows types to be specified parametrically, e.g. vectors of a specific
type and width, with widths of one becoming scalars.
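For the width-one case, the idea boils down to something like this
(hypothetical names; the real type machinery is more involved):
```
// A vector of width 1 collapses to its scalar base type.
const type_t *
parametric_vector (const type_t *base, int width)
{
    if (width == 1) {
        return base;                         // width 1 is a plain scalar
    }
    return make_vector_type (base, width);   // hypothetical constructor
}
```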
Matrices are currently completely broken as I haven't decided on how to
represent the columns (rows are represented by width, since storage is
column-major), and bools are only partially supported (I need to sort out
32-bit vs 64-bit bools).
No semantics yet, but qfcc can parse some of QF's shaders. The grammar
mostly follows that in the OpenGL Shading Language, Version 4.60.7 spec,
but with a few fewer tokens.
This gets the types such that either there is only one definition, or C
sees the same name for what is essentially the same type despite there
being multiple local definitions.
It just feels cleaner than unnecessarily copying token chains. It turns
out that the core problem was just order of operations in next_token:
moving the pending_macro code to after arg/macro detection seems to be
correct (even bare `G LPAREN() 0)` is *not* expanding `G`, as expected).
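For reference, with definitions along the lines of the C standard's
__VA_OPT__ example, the check is:
```
#define LPAREN() (
#define G(Q)     42

// Scanning the bare sequence "G LPAREN() 0)": G is followed by the
// identifier LPAREN rather than '(', so G is not invoked.  LPAREN() then
// expands to '(', but G is not re-examined, so the result is "G ( 0 )"
// with G left unexpanded -- the expected behaviour.
G LPAREN() 0)
```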
I got tired of the way the separate token types for macro expansion and
the rest of the preprocessor parser were handled. This makes them a
little more unified. Macro expansion seems to be slightly broken again
in that min/max/bound mess up badly, and __VA_OPT__ does things in the
wrong order, but I wanted to get this in as a checkpoint.
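The sort of nesting involved is along these lines (generic definitions
for illustration, not necessarily QF's exact macros):
```
#include <stdio.h>

#define min(a, b)       ((a) < (b) ? (a) : (b))
#define max(a, b)       ((a) > (b) ? (a) : (b))
#define bound(l, x, h)  max (l, min (x, h))

int main (void)
{
    int v = 42;
    // Expanding bound (0, v, 10) expands min inside max's arguments and
    // then rescans the result -- the kind of nesting that is misbehaving.
    printf ("%d\n", bound (0, v, 10));   // prints "10"
    return 0;
}
```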
__VA_ARGS__ seems to be working, but __VA_OPT__ still needs a lot of
work for dealing with its expansions; basic error checking and simple
expansions do seem to work, though.
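A minimal example of the simple case that does work (standard __VA_OPT__
behaviour, nothing qfcc-specific):
```
#include <stdio.h>

// __VA_OPT__(,) emits the comma only when variable arguments are present.
#define LOG(fmt, ...)  printf (fmt __VA_OPT__(,) __VA_ARGS__)

int main (void)
{
    LOG ("starting\n");        // expands to printf ("starting\n")
    LOG ("value: %d\n", 42);   // expands to printf ("value: %d\n", 42)
    return 0;
}
```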
Macros now store their arguments and have a cursor pointing to the next
token to take from their expansion list. While not checked yet, this
will make avoiding recursive macro invocations much easier. More
importantly, it's a step closer to correct argument expansion (though
token pasting is currently broken).
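Roughly, the structure is something like this (type and field names are
illustrative assumptions, not the actual qfcc types):
```
#include <stdbool.h>

// A macro records its arguments and keeps a cursor into its expansion
// list; tokens are handed out one at a time, and a macro that is still
// being read from can be flagged so recursive invocation is easy to spot.
typedef struct macro_s {
    const char          *name;
    struct rua_token_s  *expansion;  // recorded replacement-list tokens
    struct rua_token_s  *cursor;     // next token to take from the expansion
    struct rua_token_s **args;       // recorded argument token lists
    int                  num_args;
    bool                 in_use;     // guard against recursive expansion
} macro_t;
```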
Or at least mostly so. The __QFCC__ define isn't visible, and it seems
undef might not be working properly (ruamoko/lib/types.r doesn't
compile). Of course, there's still the issue of whether it's compiling
correctly.
Really, function-type macros expand too, but incorrectly, as the
parameters are not parsed and thus not expanded. Still, this gets the
basic handling implemented, including # and ## processing.
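The # and ## handling covers the usual stringizing and pasting, e.g.
(plain standard C, nothing qfcc-specific):
```
#include <stdio.h>

#define STR(x)      #x         // '#' stringizes the argument
#define GLUE(a, b)  a ## b     // '##' pastes two tokens into one

int main (void)
{
    int GLUE (val, 1) = 5;     // declares "int val1 = 5;"
    puts (STR (hello world));  // prints "hello world"
    printf ("%d\n", val1);     // prints "5"
    return 0;
}
```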
This will be used for unifying preprocessing and parsing, the idea being
that the tokens will be recorded for later expansion via macros, without
the need to retokenize.