The unification of qc and c function symbol handling made it important
that the new symbol was a proper duplicate (minus being in a table) of
the old symbol. This fixes redeclared prototypes (especially for
qc-style functions, not encountered for c-style). Complete with unit
test :)
Other than constructors (and problems with the `out` block), qfcc can
compile fstrianglest.vert to what looks like working Ruamoko code.
There's still a lot of work to do, though.
Using a struct with function pointers instead of switching on an enum
makes it much easier to add languages and, more importantly,
sub-languages like glsl's shader stage variants.
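A minimal sketch of the shape I mean (all names here are illustrative, not qfcc's actual interface): each language, or sub-language, fills in a table of callbacks and the compiler dispatches through the table, so a shader stage variant is just another table sharing most of its entries.

#include <stdio.h>

// sketch only: simplified stand-in for a per-language callback table
typedef struct language_s {
    const char *name;
    int (*parse) (const char *filename);    // front-end entry point
    int (*finish) (void);                   // end-of-translation-unit hook
} language_t;

static int glsl_parse (const char *filename)
{
    printf ("parsing %s as glsl\n", filename);
    return 0;
}
static int glsl_finish (void) { return 0; }

// stage variants reuse the same callbacks but can override any of them
static language_t lang_glsl_vert = { "glsl (vertex)", glsl_parse, glsl_finish };
static language_t lang_glsl_frag = { "glsl (fragment)", glsl_parse, glsl_finish };

static int compile (language_t *lang, const char *file)
{
    if (lang->parse (file))     // dispatch through the table, no enum switch
        return 1;
    return lang->finish ();
}

int main (void)
{
    compile (&lang_glsl_vert, "fstrianglest.vert");
    compile (&lang_glsl_frag, "fstrianglest.frag");    // illustrative name
    return 0;
}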
Simply referencing the original metafunc resulted in only the first
variant getting a def. Now my little test generates defs for all called
variants of a generic function.
However, I'm still not sure this is quite the direction I want to go
with making calls to generic functions, but I still need to figure out
defining them. I think making progress with the glsl front-end will
help.
Checking only the last function to be added results in false negatives
and thus duplicates when defining a generic function. eg:
genFType radians (genFType degrees);
genDType radians (genDType degrees);
genFType radians (genFType degrees) = #0;
genDType radians (genDType degrees) = #0;
Detecting generic functions needs to be done before finalizing the
function type for non-generic functions, otherwise the resulting
function type winds up being incorrect due to bogus resolution of the
return type (and probably a few other factors).
This takes care of handling the return type in function definitions as
well as declarations.
I don't know why I didn't apply the same ideas as in methoddef, maybe I
just forgot. I'm pretty sure I did methodproto first, or maybe cleaned
up methoddef much later and then didn't think of methodproto. Still, the
grammar is much nicer to read now.
Arrays of functions or functions that return arrays or functions aren't
valid. While working on how to get generics in properly, I finally
understood what's going on with function types in the specifier, and I
think I'll be able to sort out function pointers vs prototypes, too.
Generic qc-style prototypes don't work but both c and qc style
definitions get as far as trying to build the builtin function. It's
starting to look like I need to rework function handling, which isn't
all that surprising as it hasn't changed much over 22 years.
I never did like overloaded_function_t as a name, and with the
introduction of generic functions (or templates, I guess?) meta-function
makes more sense to me.
Most of them were noise from the type const correctness pass, but the
qc field function handling was always dubious (though in practice safe
due to how the type was built, but...). The remaining casts outside of
type.c need some thought.
I was investigating how qcc dealt with field fields and got segfaults
from qfprogs due to the lack of type encodings. While the resulting
output is not guaranteed to be correct (especially if trying to dump
progs compiled by some other extended compiler), at least it's better
than segfaulting or any other UB.
There's still the problem of defining implementations, but this gets a
lot of things working so long as the return type is one of the parameter
types rather than computed.
While much is broken, especially textures, e4m8.map successfully imports
into Blender 4.2-ish, and even the entity relation lines seem to work
(which surprised me as those were the first sign of trouble).
And document type expressions to a certain extent. A lot of work needs
to be done as there's a lot of bitrot in the information in the doc (eg,
integer is now int, array declarations, etc).
A working call isn't produced yet, but the generic function does seem to
be selected correctly, preferring a minimum of type promotions, though I
suspect I'll need to do more work on the selection process.
Mainly to prevent promotion between different vector and matrix sizes,
but also to be consistent in that double doesn't promote to itself (same
as other scalar types). Also fix up type_rows and type_cols for matrices
and make type_assignable, type_promotes, and type_compatible null-safe
(always false if either src or dst is null).
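In sketch form, what null-safe means here (with a stand-in type_t, and the dst/src parameter order is a guess rather than the real signature); the same guard goes in type_assignable and type_compatible:

#include <stdbool.h>

typedef struct type_s type_t;   // stand-in for the real type used by type.c

bool type_promotes (const type_t *dst, const type_t *src)
{
    if (!dst || !src)
        return false;       // null-safe: a missing type never promotes
    // ... the actual promotion rules (scalar rank, vector/matrix size) ...
    return false;
}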
With this, genFType and genDType functions are now treated separately
and expanding to all components (single row or column matrices are not
supported (at this stage, anyway) for generic parameters).
That is, `@generic(...) { ... };`, which is handy for bulk declarations
(such as for glsl). This proved to be a lot harder than expected, I
suspect handling of specifiers needs a lot of work.
I never did like all those cryptic parameters, especially when they were
just 0 and 1. Now the individual specifier creation functions take only
those parameters they need (if any) and there's no possibility of
conflicting parameters. Also, it's clearer what the new specifier does.
It doesn't properly differentiate between (treats genDType as being the
same as genFType):
@generic(genFType=@vector(float)) genFType radians(genFType degrees);
@generic(genDType=@vector(double)) genDType radians(genDType degrees);
but this is due to problems with how the type is built from
@vector(float) and @vector(double). However, I thought it was about time
I got some of this into git.
Also, `@generic(...) { ... };` blocks don't work properly (they lose the
generic info): need to get a little smarter about handling generic scope
in `external_def_list`.
This fits in nicely with the rest of the generic type system and makes
it a little more useful. The idea is it will take a return type (already
does since type functions always require a parameter at this stage) and
a parameter list (not implemented yet). It currently resolves to the
basic void (...) function type for QC.
It turns out they're not needed as the difficulty of quakec function
declarations (function fields) is taken care of by qc_function_spec.
Also, they made it difficult to think about function declarations.
While they might be ok, I expect them to cause some issues when doing
compile-time evaluations of type expressions, so use of dags seems to be
a premature optimization. However, as the "no dags" flag is propagated
to parent expression nodes, it may prove useful in other contexts.
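The propagation itself is simple; a sketch with a made-up expr_t (not qfcc's real node layout): a child flagged "no dags" taints every parent built on top of it, so the whole type expression stays a plain tree:

#include <stdbool.h>
#include <stdlib.h>

typedef struct expr_s {
    struct expr_s *e1, *e2;
    bool nodag;             // do not merge this node into the expression dag
} expr_t;

static expr_t *new_binary_expr (expr_t *e1, expr_t *e2)
{
    expr_t *e = calloc (1, sizeof (expr_t));
    e->e1 = e1;
    e->e2 = e2;
    // propagate: if either child is dag-unsafe, so is the parent
    e->nodag = (e1 && e1->nodag) || (e2 && e2->nodag);
    return e;
}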
The expression grammar has been tidied up and some basic checks are made
of parameters to the type functions. Also, type parameters are looked up
so parsing now works properly. However, the type parameters are not used
correctly, so function generation doesn't work.
The semantics are only partially implemented (generic types not yet
generated), but the generic scope for function declarations seems to be
working as intended in that it gets inserted in the scope chain and
removed at the end of the declaration.
The end goal is to allow generic and/or template functions, but this
allows types to be specified parametrically, eg vectors of specific type
and width, with widths of one becoming scalars.
Matrices are currently completely broken as I haven't decided on how to
represent the columns (rows are represented by width (column-major
storage)), and bools are only partially supported (need to sort out
32-bit vs 64-bit bools).
No semantics yet, but qfcc can parse some of QF's shaders. The grammar
mostly follows that in the OpenGL Shading Language, Version 4.60.7 spec,
but with slightly fewer tokens.
While it currently doesn't have any effect on generated code, it proved
to be necessary when experimenting with optimizing the operand of extend
expressions (which proved to produce worse code).
Along with some 0 -> nullptr changes.
The change to not split basic blocks on function calls resulted in the
@return def not being live and thus getting dropped when optimizing.
Marking the def as not local forces flow and dags to treat it as global
and thus forces it to be live.
This gets the types such that either there is only one definition, or C
sees the same name for what is essentially the same type despite there
being multiple local definitions.
I need upstream flex for its line handling and nicer interface, but
Debian's flex is ancient and stuck with the legacy interface, and I want
QF to be buildable on at least sid.
The syntax is not at all correct at this stage (really, just a copy of
Ruamoko), but the keyword table exists (in the wrong place) and the
additional basic types (bool, bvecN and (d)matNxM) have been added.
Boolean base type is currently just int, and matrices have 0 width while
I think about what to use, but finally some progress after several
months' hiatus.
It's disabled by default because it's a runtime thing and I'm not sure I
want to keep it enabled, but it did find some issues (which I've cleaned
up), although it didn't find the problem I was looking for :P
This allows the dags code to optimize the return values and, when I make
the node killing by function calls less aggressive, it should make for
many more potential CSE optimizations.
Offset casts are used heavily in geometric algebra, but doing an offset
cast on a dereferenced pointer (array index) doesn't work well for
assignments. Fixes yet another bug in my test scene :)
This takes care of ptrmove instructions, fixing the printf problem in
ptrstructinit. It might even make it possible to not break basic blocks
on function calls, which should make for some more optimization
opportunities, but that can come later.
For this to work properly, it was necessary to avoid converting alias
defs to st_alias nodes.
It also fixes ptrstructinit itself (at first, I hadn't noticed that the
test now passed entirely).
It turns out the bug I was chasing in set_poses was not the data getting
lost or incorrect data being read, but the arguments to printf being set
from the parent pose `p` before `p` had actually been set.
This gets rid of the funny little self-loop edge in dags dot files that
has bothered me for a while. It was just because the node to which an
identifier was attached happened to be the parent of the identifier's
leaf node.
The fix in bdafdad0d5 for
`while (count--)` never did appeal to me. I think I understood the core
problem at the time, but I hadn't figured out how to use a var's
use/define sets to detect the write-before-read. Using them allows the
special handling for flow control to be removed, making things more
robust. The function call handling has been superfluous since the
Ruamoko instruction set required the auxiliary operands on the call
statements.
Not by much really, but using one "aux" count instead of different
types, and using functions to iterate through the statement's aux
operands makes the code easier to read.
Two birds with one stone: eliminates most of the problems with going
const-correct with expr_t, and it makes dealing with internally generated
expressions at random locations much easier as the set source location
affects all new expressions created within that scope, to any depth.
Debug output is much easier to read now.
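Roughly the mechanism (invented names, not the actual qfcc API): a single current location that newly created expressions pick up implicitly; setting it returns the old value so the caller can restore it when the scope ends, and anything built in between, to any depth, gets the new location:

typedef struct src_loc_s {
    const char *file;
    int         line;
} src_loc_t;

static src_loc_t current_loc;   // picked up by every newly created expression

static src_loc_t set_src_loc (src_loc_t loc)
{
    src_loc_t saved = current_loc;
    current_loc = loc;      // affects all expressions built from here on
    return saved;           // restore this at the end of the scope
}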
ptrmove was not treating its indirect source operand as used because the
pointer wasn't checked for its source.
This fixes part of ptrstructinit, but there's a lot more breakage in
that test: it looks like all sorts of fun with arrays.
There were a few places where some const-casts were needed, but they're
localized to code that's supposed to manipulate types (but I do want to
come up with something to clean that up).
When I implemented the st_alias handling (d8a78fc849), I was
unsure if I needed to unalias aliased defs too, but it turns out to have
been necessary: this is what caused my 2d PGA dynamics test to blow up
strangely. A GA vector loaded from an array into a local variable had the
local var replaced by a temp, but the var itself was read later in the
code (an uninitialized variable due to incorrect optimization... oops).
When sum_expr gets a null expression as one of its args, it simply
returns the other arg, but that arg needs to have the correct type
applied.
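The shape of the fix, as a sketch (stand-in types, a guessed signature, and hypothetical helper constructors, not qfcc's actual expression API):

typedef struct expr_s expr_t;
typedef struct type_s type_t;

// hypothetical stand-ins for the real expression constructors
expr_t *cast_expr (const type_t *type, expr_t *e);
expr_t *binary_expr (int op, const type_t *type, expr_t *a, expr_t *b);

expr_t *sum_expr (const type_t *type, expr_t *a, expr_t *b)
{
    if (!a && !b)
        return 0;
    if (!a)
        return cast_expr (type, b);     // keep b, but give it the sum's type
    if (!b)
        return cast_expr (type, a);     // keep a, but give it the sum's type
    return binary_expr ('+', type, a, b);
}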
Handle zero (null result) cross product in bivector geometric product.
Clean up types in bivector * vector geometric product.
I'm not sure the regressive product is right (overall sign), but that's
actually partly a problem in the math itself (duals and the regressive
product still get poked at, so it may be just a matter of
interpretation).
I'm not sure anything other than == or != has much meaning on anything
but scalars and pseudo scalars, but all comparisons are supported as a
simple boolean test. Any missing components are assumed to be 0. If
nothing else, it makes unit tests easier to write.
It turns out the algebra types make expression dag creation much more
difficult, resulting in missed optimizations (eg, recognizing `a × a`).
This fixes the dead cross products in `⋆(s.B × ⋆s.B)`.
The switch to using expression dags instead of trees meant that the
statement generator could traverse sub-expressions multiple times. This
is inefficient but usually ok if there are no side effects. However,
side effects and branches (usually from ?:, due to labels) break: side
effects happen more than once, and labels get emitted multiple times
resulting in orphaned statement blocks (and, in the end, uninitialized
temporaries).
Just running through the list of expressions in a block expression
results in label expressions within the block getting printed by the
expressions that reference them; they thus don't receive the correct next
pointer and wind up pointing to themselves. Printing the labels first
ensures they have the correct next pointer. However, I suspect there are
other ways things will get tangled.
I'm surprised it took almost two years to discover that I had no
quaternion multiplications in any test code, but getting an ICE for a
quaternion-vector product, and the Hadamard product for a
quaternion-quaternion product, was a bit of a nasty surprise.
This makes a slight improvement to the commutator product in that it
removes the expand statement, but there's still the problem of (a+a)/2.
However, at least now the product is correct and slightly less abysmal.
This takes advantage of evaluate_constexpr to do all the work. Necessary
for use of basis blade constants in algebra contexts (avoids an internal
error).
This fixes the upostop-- test by auto-casting implicit constants to
unsigned (and it gives a warning for signed-unsigned comparisons
otherwise). The generated code isn't quite the best, but the fix for
that is next.
Also clean up the resulting mess, though not properly. There are a few
bogus warnings, and the legit ones could do with a review.
It's actually good enough for building qfcc: it fails a couple of
obscure c23 preprocessor examples that can be ignored for now (the rest
of QF builds without any apparent issue).
The expansion is necessary for the final test in preproc-2.r, but breaks
preproc-1.r because the closing ')' is *not* visible to collect_args
(its assumption is incorrect). This needs reworking (and probably
rethinking) of the entire macro argument collection, but I need a little
break from the preprocessor (and it's good enough for *most* uses), so
I'm adding the code (disabled) in order to avoid losing it and my notes
about the problem.
That is, if anything other than '(' (even a macro/argument that expands
to '(') is seen while checking for a function-type macro, the expansion
fails. This gets preproc-1.r working properly.
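In sketch form (a simplified check, nothing like the real scanner code): the name of a function-type macro is only expanded when the next token actually seen in the input is a literal '(':

#include <stdbool.h>

typedef enum { tok_lparen, tok_id, tok_other } token_kind_t;

// next is the token as it appears in the input stream, *before* any macro
// expansion of that token; a token that merely expands to '(' doesn't count.
static bool expand_funclike_macro (token_kind_t next)
{
    return next == tok_lparen;      // anything else: the expansion fails
}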
It really affected only collect_args, but in fixing that I noticed I was
very inconsistent with the scanner's type (should be yyscan_t or void *, but
not yyscan_t *).