There are problems with symbol lookup (eg, generic type names, image
operand names), but the system seems to be working: texelFetch maps to
OpImageFetch (which uses explicit arguments even though it doesn't need
to) and the arguments to OpImageFetch get set correctly.
I won't say it belongs in glsl-builtins, though the glsl-specific stuff
probably does (glsl texture handling is a mess). Also adds sampler
attribute handling (which falls back to image when necessary).
If `@intrinsic()` is followed by `[expr_list]` then those expressions
will be used to create the intrinsic rather than the function's
parameters, allowing for reordering, adding extra parameters, or even
using complex expressions.
However, only the parsing is implemented.
Unfortunately, this requires using different syntax for the two cases:
type.attr works where types are expected, but not in expressions (lots
of shift/reduce and reduce/reduce conflicts). Treating the type like an
Objective-C class works nicely, though, even if `[type attrib(params)]`
looks a little odd, and it allows using generic types to provide
function calls (eg, converting texture coordinates).
I guess I hadn't thought of it because GLSL doesn't have `auto`, but
since the builtin functions are implemented in Ruamoko, `auto` is
available and I use it for the atomic image functions.
The goal is to make it easy to get size/coord/base types from image
types without creating a zillion type functions.
It was necessary to make it possible for any type to have an attribute
function (returns an expression so it can be more useful: types are
returned via type expressions). Algebra types were the first victim
(which was nice for testing).
And disable it for GLSL. I might need to make it target-dependent
instead as I don't know if I'll be able to implement it in SPIR-V, and
when I get to the C back-end, it will be superfluous.
I had already implemented the code generation side (though using type
ids instead of encodings is a nice change), but I hadn't implemented the
actual evaluation or even called it. Now return types can be computed
from generic parameters (eg, ivecN from vecN).
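Roughly, the computation boils down to deriving a new type with the same
shape but a different base type. A minimal sketch in plain C (the
descriptor here is made up for illustration, not qfcc's actual type
representation):

    #include <stdio.h>

    /* hypothetical descriptor for illustration only */
    typedef struct {
        const char *base;    /* "float", "int", ... */
        int         width;   /* component count: vec3 -> 3 */
    } type_desc_t;

    /* ivecN from vecN: keep the width, swap the base type to int */
    static type_desc_t ivec_from_vec (type_desc_t vec)
    {
        type_desc_t ret = { "int", vec.width };
        return ret;
    }

    int main (void)
    {
        type_desc_t vec4 = { "float", 4 };
        type_desc_t ivec4 = ivec_from_vec (vec4);
        printf ("%s x %d\n", ivec4.base, ivec4.width);
        return 0;
    }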
It's still not great (mostly on the language side, I think), but
different glsl shader types get the correct execution model and fragment
shaders even get the correct execution mode.
Access qualifiers aren't supported yet, and some of the image handling
is a bit messy (currently tied to glsl when it should be general), but
the basics are there. However, now I've got to sort out the execution
model (need Fragment or GLCompute for ImplicitLod).
I really don't like the way they're included (I'm really looking forward
to #embed, but gotta wait for gcc 15), and I'm a tad grumpy that the
documentation for them
(https://registry.khronos.org/SPIR-V/specs/unified1/MachineReadableGrammar.html)
is wrong (missing fields), but I think I like the result.
The grammars (core and glsl.std.450) are parsed into structs that should
be fairly easy to interpret: the instructions, kinds, and enumerant
values are sorted by name for search with bsearch. Having the data
parsed in means source code can refer to the items by name rather than
magic numbers, which will be very nice for intrinsics and image types
(and probably a few other things).
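A minimal sketch of what the lookup side looks like, assuming an array
of name/value pairs kept sorted by name (the struct and the table here
are invented for illustration; the parsed grammar carries more fields):

    #include <stdlib.h>
    #include <string.h>
    #include <stdio.h>

    typedef struct {
        const char *name;
        int         opcode;
    } instruction_t;

    /* must be kept sorted by name for bsearch */
    static const instruction_t instructions[] = {
        { "OpImageFetch", 95 },
        { "OpImageRead",  98 },
        { "OpImageWrite", 99 },
    };

    static int inst_cmp (const void *key, const void *elem)
    {
        return strcmp (key, ((const instruction_t *) elem)->name);
    }

    static const instruction_t *find_instruction (const char *name)
    {
        return bsearch (name, instructions,
                        sizeof (instructions) / sizeof (instructions[0]),
                        sizeof (instructions[0]), inst_cmp);
    }

    int main (void)
    {
        const instruction_t *inst = find_instruction ("OpImageFetch");
        if (inst)
            printf ("%s = %d\n", inst->name, inst->opcode);
        return 0;
    }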
The idea is to allow certain contexts to interpret something like `2D`
as an identifier instead of 2.0 (double), or (not sure I'll go there)
5e12 as a bivector instead of a double or float.
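A rough sketch of the idea in plain C (the names and the context flag
are hypothetical; the real scanner is flex-based): the same lexeme gets
classified differently depending on what the current context expects.

    #include <stdbool.h>
    #include <stdio.h>
    #include <stdlib.h>

    typedef enum { TOK_ID, TOK_DOUBLE } token_kind_t;

    typedef struct {
        token_kind_t kind;
        const char  *id;     /* valid for TOK_ID */
        double       value;  /* valid for TOK_DOUBLE */
    } token_t;

    /* hypothetical context flag: true while parsing image type parameters */
    static token_t classify (const char *lexeme, bool expect_image_dim)
    {
        token_t tok;
        if (expect_image_dim) {
            /* in an image type context, "2D" names a dimensionality */
            tok.kind = TOK_ID;
            tok.id = lexeme;
        } else {
            /* otherwise parse the numeric part: "2D" is the double 2.0 */
            tok.kind = TOK_DOUBLE;
            tok.value = strtod (lexeme, 0);
        }
        return tok;
    }

    int main (void)
    {
        token_t a = classify ("2D", true);
        token_t b = classify ("2D", false);
        printf ("%d %s / %d %g\n", a.kind, a.id, b.kind, b.value);
        return 0;
    }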
In the end, it does simplify things a lot, and it helped get function
pointers working at least a bit (they're not quite the same as C yet,
but I think that's mostly because functions can be struct fields).
They should now work in generic contexts, but the pressing need to work
on arrays was due to constant expressions for element counts breaking.
As a side effect, function pointers are now a thing (and seem to work
like they do in C).
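For reference, this is the C behavior they're modeled on (plain C, not
Ruamoko): a function name decays to a pointer, and a call through the
pointer looks like an ordinary call.

    #include <stdio.h>

    static int square (int x) { return x * x; }

    int main (void)
    {
        int (*fp) (int) = square;   /* function name decays to a pointer */
        printf ("%d\n", fp (5));    /* call through the pointer: 25 */
        return 0;
    }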
GA expressions need to be optimized so things that should cancel out do,
and this requires everything to be dagged. Doing so in expr_process gets
most of the expressions, and then a few stragglers in proc_field. There
might still be some more, but my test scene compiles again.
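A toy sketch of why dagging matters (hypothetical code, nothing like the
real expr_process): once identical subexpressions are interned to the
same node, a term and the term it should cancel against can be
recognized by simple pointer comparison.

    #include <stdio.h>
    #include <string.h>

    typedef struct node_s {
        char                 op;        /* '+', '-', or 'x' for a leaf */
        const char          *name;      /* leaf name */
        const struct node_s *a, *b;     /* operands */
    } node_t;

    static node_t pool[64];
    static int    pool_count;

    /* intern: reuse an existing node if an identical one already exists */
    static const node_t *intern (char op, const char *name,
                                 const node_t *a, const node_t *b)
    {
        for (int i = 0; i < pool_count; i++) {
            node_t *n = &pool[i];
            if (n->op == op && n->a == a && n->b == b
                && ((!name && !n->name)
                    || (name && n->name && !strcmp (name, n->name)))) {
                return n;
            }
        }
        node_t *n = &pool[pool_count++];
        *n = (node_t) { op, name, a, b };
        return n;
    }

    int main (void)
    {
        const node_t *x = intern ('x', "a", 0, 0);
        const node_t *y = intern ('x', "a", 0, 0);   /* same node as x */
        /* a - a: both operands are the same dag node, so it cancels */
        printf ("%s\n", x == y ? "cancels to 0" : "kept");
        return 0;
    }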
The code is pretty lousy in that it assumes there's only one `return`
and that it's at the end of the function, and the generated code is even
worse (too many load/store ops in the spir-v), but it looks like it at
least works. It does pass validation.
They're buggy in that the defspaces for parameters and locals are
incorrect (they need to point to the calling scope's space). Also,
parameters are not yet hooked up correctly. However, errors (because I
need to allow casts from scalars to vectors) do get handled.
Because the symbol tables for generic functions are ephemeral and need
to be easily removed from the scope chain, it's easiest if definitions
are never added to them (instead, they get added to the parent symbol
table). This keeps handling of function declarations or definitions and
their parameter scopes simple: the function still gets put in the global
scope, and the parameter scope simply gets reconnected to the global
scope (really, the generic scope's parent) when the parameter scope is
popped within a generic scope.
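A simplified sketch of the scope handling (invented structs, not qfcc's
actual symtab code): definitions skip past ephemeral generic scopes to
the parent, so popping the generic scope is just re-pointing a parent
link.

    #include <stdbool.h>
    #include <stdio.h>

    typedef struct symtab_s {
        const char      *name;
        bool             ephemeral;    /* generic scopes are ephemeral */
        struct symtab_s *parent;
    } symtab_t;

    /* find where a definition should actually land: skip ephemeral scopes */
    static symtab_t *def_target (symtab_t *scope)
    {
        while (scope->ephemeral)
            scope = scope->parent;
        return scope;
    }

    int main (void)
    {
        symtab_t global  = { "global",  false, 0 };
        symtab_t generic = { "generic", true,  &global };
        symtab_t params  = { "params",  false, &generic };

        /* a function declared inside the generic scope still lands in global */
        printf ("definition goes to: %s\n", def_target (&generic)->name);

        /* popping the generic scope: reconnect params to the generic
           scope's parent (the global scope) */
        params.parent = generic.parent;
        printf ("params now chains to: %s\n", params.parent->name);
        return 0;
    }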
The back-end support for renamed builtins (fte's = #0:name) was needed
for pascal functions because the internal name is now prefixed with an @
to allow the lvalue/rvalue selection of behavior for function symbols.
xvalue symbols refer to two expressions: an lvalue and an rvalue. They
are meant to be used with xvalue expressions.
xvalue expressions are useful when the behavior of something (eg, a
pascal function symbol) must change depending on whether it's an lvalue
(assignment of the function's return value) or an rvalue (a call to the
function, especially when the function takes no parameters).
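A bare-bones sketch of the idea (hypothetical types, not the real symbol
code): an xvalue symbol carries both expressions, and the consumer picks
one based on how the symbol is being used.

    #include <stdio.h>

    typedef struct {
        const char *description;   /* stand-in for a real expression node */
    } expr_t;

    typedef struct {
        expr_t *lvalue;   /* used when the symbol is assigned to */
        expr_t *rvalue;   /* used when the symbol is read/called */
    } xvalue_t;

    static expr_t *xvalue_expr (xvalue_t *xv, int want_lvalue)
    {
        return want_lvalue ? xv->lvalue : xv->rvalue;
    }

    int main (void)
    {
        expr_t set_retval = { "assign the function's return value" };
        expr_t call_func  = { "call the function" };
        xvalue_t func_sym = { &set_retval, &call_func };

        /* pascal: `f := 42` selects the lvalue, `x := f` selects the rvalue */
        printf ("lvalue use: %s\n", xvalue_expr (&func_sym, 1)->description);
        printf ("rvalue use: %s\n", xvalue_expr (&func_sym, 0)->description);
        return 0;
    }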
Because the glsl front-end uses Ruamoko to compile its builtins, it
needs to switch languages, and the cleanest way to do so is to use a
context object that gets passed around. This removes not only the
current_language global, but also (as a bonus) any real references to
flex's scanner object: there's still a pointer in rua_ctx_t, but it's no
longer a parameter (which caused some pain in the change).
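Roughly the shape of the change (everything here apart from the
rua_ctx_t name is invented for illustration): state that used to live in
globals rides in a context struct that gets passed to everything.

    #include <stdio.h>

    typedef enum { LANG_RUAMOKO, LANG_GLSL } language_t;

    /* rough stand-in for rua_ctx_t: the current language and the scanner
       pointer travel with the context instead of living in globals */
    typedef struct {
        language_t  language;
        void       *scanner;     /* flex scanner object */
    } rua_ctx_t;

    static void compile_source (const char *name, rua_ctx_t *ctx)
    {
        printf ("compiling %s as %s\n", name,
                ctx->language == LANG_GLSL ? "glsl" : "ruamoko");
    }

    int main (void)
    {
        rua_ctx_t glsl_ctx = { LANG_GLSL, 0 };
        rua_ctx_t rua_ctx  = { LANG_RUAMOKO, 0 };

        /* the glsl front end can switch to ruamoko for its builtins by
           handing a different context to the same code paths */
        compile_source ("shader.frag", &glsl_ctx);
        compile_source ("glsl builtins", &rua_ctx);
        return 0;
    }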
The main goal was to make it possible to give generic functions
definitions (since the code would be very dependent on the actual
parameter types), but it will also allow for inline functions. It also
helped move a lot of the back-end dependent code out of semantics
processing and almost completely (if not completely) out of the parser.
Possibly more importantly, it gets the dag flushing out of the parser,
which means that work is now shared by all front-ends.
There's probably a lot of dead code in expr.c now, but that can be taken
care of another time.
I found a need to check for shifts separately (not sure it's the right
approach for that problem, though), and there are a few more math ops
than just +-*/.
Now both width and columns must match for an instruction to be selected.
Found a few errors in my opcode specs, and some minor goofs in the type
system (really just overthinking things when I added matrices).
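Sketched in plain C (made-up spec table, not the real opcode specs), the
selection rule is just that both fields have to match.

    #include <stdio.h>

    typedef struct {
        const char *name;
        int         width;     /* components per column */
        int         columns;   /* 1 for scalars and vectors */
    } opcode_spec_t;

    static const opcode_spec_t specs[] = {
        { "mul_vec4", 4, 1 },
        { "mul_mat4", 4, 4 },
    };

    /* both width and columns must match for an instruction to be selected */
    static const opcode_spec_t *select_opcode (int width, int columns)
    {
        for (unsigned i = 0; i < sizeof (specs) / sizeof (specs[0]); i++) {
            if (specs[i].width == width && specs[i].columns == columns)
                return &specs[i];
        }
        return 0;
    }

    int main (void)
    {
        const opcode_spec_t *op = select_opcode (4, 4);
        printf ("%s\n", op ? op->name : "no instruction");
        return 0;
    }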