That is, scale(scale(A,b),c) becomes scale(A,b*c), thus giving the
expression dag more opportunities to find common sub-expressions. My
fancy zero test is down to 20 total instructions (including overhead, or
16 for the actual algebra).
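The transformation itself is tiny; roughly this (illustrative types
only, not the actual qfcc representation, and the factor is a plain
double here for brevity when it's really another expression):

    /* Collapse scale(scale(A, b), c) into scale(A, b*c) so equivalent
       scalings become a single dag node. */
    #include <stdlib.h>

    typedef struct expr_s {
        enum { LEAF, SCALE } kind;
        struct expr_s *vec;     /* scaled operand (for SCALE) */
        double         factor;  /* scale factor */
    } expr_t;

    static expr_t *
    scale (expr_t *vec, double factor)
    {
        if (vec->kind == SCALE) {   /* nested scale: merge the factors */
            factor *= vec->factor;
            vec = vec->vec;
        }
        expr_t *e = malloc (sizeof (expr_t));
        *e = (expr_t) { .kind = SCALE, .vec = vec, .factor = factor };
        return e;
    }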
While splitting up the scaled vector into scaled xyz and scaled w does
cost an extra instruction, it allows other optimizations to be applied.
For one, extend expressions now get all the way to the top, and there
are at most two of them (in my test cases), so it's either break-even
or a slight reduction in instruction count. However, in the initial
implementation I forgot to do the actual scaling: 12 instructions were
removed from my fancy zero case, but real tests failed :P It looks
like it's just distributivity and commutativity holding things back
(e.g., omega*gamma*sigma - gamma*omega*sigma should be 0, but it isn't
recognized as such).
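The reason the dag misses it: omega*gamma and gamma*omega get built as
distinct nodes, so the outer products never compare equal. A purely
illustrative fix (not what the code does here; all names are
hypothetical) would be to put the operands of a commutative op into a
canonical order before the dag lookup:

    #include <stdint.h>

    typedef struct expr_s expr_t;
    /* hypothetical dag lookup/insert for a binary node */
    const expr_t *dag_binary (int op, const expr_t *a, const expr_t *b);

    static const expr_t *
    dag_mult (const expr_t *a, const expr_t *b)
    {
        if ((uintptr_t) a > (uintptr_t) b) {    /* canonical operand order */
            const expr_t *t = a; a = b; b = t;
        }
        return dag_binary ('*', a, b);
    }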
This fixes the motor test :) It turns out that every lead I had
previously was due to the disabling of that feature "breaking" dags
(such that shared expressions wouldn't be found); it was the dagged
multi-vector components getting linked by expr->next that made a mess
of things.
Or at least mostly so (there are a few casts). This doesn't fix the
motor bug, but I've wanted to do this for over twenty years and at least
I know what's not causing the bug. However, disabling fold_constants in
expr_algebra.c does "fix" things, so it's still a good place to look.
This doesn't fix the motor bug, but it doesn't make it worse. However,
it does simplify the trees quite a bit, so it should be easier to
debug. It seems the problem has something to do with fold_constants
messing up dagged aliases: in particular, const-folding multiplication
by e0123 in 3D PGA, because fold_constants sees e0123 as 1.
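For the record, why treating e0123 as 1 is wrong: in 3D PGA
(signature 3,0,1) e0 squares to 0, so the pseudo-scalar squares to 0
rather than 1:

    e0123 * e0123 = e0 e1 e2 e3 e0 e1 e2 e3
                  = -(e0 e0) (e1 e2 e3) (e1 e2 e3)
                  = 0

and multiplying by e0123 annihilates anything containing e0, so it is
nothing like multiplying by 1.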
I'm not yet sure what went wrong, but the introduction of dags broke
something in my set_transform function (perhaps the dual?); it's
something to do with the symbol being dagged (I guess because it's
required for everything else to dag). However, the strangest thing is
that the error shows up with 155a8cbcda, which is before dags had any
direct effect on the geometric algebra code. I have a sneaking
suspicion it's yet another convert_name issue.
They should be treated as such only when merging vector components. This
fixes a bug that doesn't actually exist (it's in experimental code),
where the sum of two 3-component vectors was getting lost.
For cross products: remove any a from a×(...+/-a...)
For dot products: remove any a×b from a•(...+/-a×b...) (or b×a)
This removed another 2 instructions :)
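Both rules come down to a×a = 0 and a•(a×b) = 0; a throwaway check
(not project code, just an illustration):

    #include <stdio.h>

    typedef struct { double x, y, z; } vec3;

    static vec3 add (vec3 a, vec3 b)
    {
        return (vec3) { a.x + b.x, a.y + b.y, a.z + b.z };
    }
    static double dot (vec3 a, vec3 b)
    {
        return a.x*b.x + a.y*b.y + a.z*b.z;
    }
    static vec3 cross (vec3 a, vec3 b)
    {
        return (vec3) { a.y*b.z - a.z*b.y,
                        a.z*b.x - a.x*b.z,
                        a.x*b.y - a.y*b.x };
    }

    int main (void)
    {
        vec3 a = {1, 2, 3}, b = {-4, 5, 0.5}, c = {2, -1, 7};
        vec3 l = cross (a, add (b, a));     /* a×(b+a) */
        vec3 r = cross (a, b);              /* a×b     */
        printf ("%g %g %g == %g %g %g\n", l.x, l.y, l.z, r.x, r.y, r.z);
        /* a•(c + a×b) == a•c */
        printf ("%g == %g\n", dot (a, add (c, cross (a, b))), dot (a, c));
        return 0;
    }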
They don't have much effect that I've noticed, but the expression dag
code does check for commutative expressions. The algebra code uses the
anticommutative flag for cross, wedge and subtract (unconditionally at
this stage). Integer ops that are commutative (or anticommutative) are
always flagged as such. Floating point ops can be controlled (they
default to non-commutative), but there's currently no way to set the
options.
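Roughly what the flags buy the dag (types and names here are
hypothetical, just to show the idea): a node built as (op, r, l) can
match an existing (op, l, r) directly when op is commutative, or up to
a negation when it is anticommutative:

    typedef struct node_s {
        int op;
        unsigned commutative:1;
        unsigned anticommutative:1;
        const struct node_s *l, *r;
    } node_t;

    /* 1: same value, -1: negated value, 0: no match */
    static int
    match_binary (const node_t *n, int op, const node_t *l, const node_t *r)
    {
        if (n->op != op)
            return 0;
        if (n->l == l && n->r == r)
            return 1;
        if (n->l == r && n->r == l) {
            if (n->commutative)
                return 1;
            if (n->anticommutative)
                return -1;
        }
        return 0;
    }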
This takes advantage of the expression dag to detect when an expression
is on both sides of a cross product (which always results in 0). This
removes 3 instructions from my motor test (28 to go).
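With a dag, "the same expression on both sides" is literally the same
node, so the check amounts to pointer equality. A sketch (helper names
are hypothetical):

    typedef struct expr_s expr_t;
    const expr_t *new_zero_expr (void);
    const expr_t *new_cross_expr (const expr_t *a, const expr_t *b);

    static const expr_t *
    fold_cross (const expr_t *a, const expr_t *b)
    {
        if (a == b)                     /* a × a == 0 for any a */
            return new_zero_expr ();
        return new_cross_expr (a, b);
    }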
Especially binary expressions. That expressions can now be reused is
what caused the need to make expression lists non-invasive: the reuse
resulted in loops in the lists. This doesn't directly affect code
generation at this stage but it will help with optimizing algebraic
expressions.
The dags are per sequence point (as per my reading of the C spec).
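The invasive vs non-invasive distinction, for reference (illustrative
types only): with a next pointer embedded in the expression a node can
sit on only one list, so linking a shared dag node into a second list
corrupts the first (or loops it); keeping the links in separate list
elements lets the same expression appear in any number of lists.

    typedef struct expr_s expr_t;

    /* invasive: the chain lives inside the expression itself */
    struct expr_s {
        /* ... expression data ... */
        expr_t *next;
    };

    /* non-invasive: the chain lives in separate list elements */
    typedef struct listelement_s {
        struct listelement_s *next;
        const expr_t *expr;     /* the same expr can be in many lists */
    } listelement_t;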
The removal of the e tag from expr_t necessitated making convert_name
return a new expression, which meant get_type was no longer enough to
both convert a name expression and get its type. This was just another
missed spot. With this, all of game-source except ctf builds.
Finally, that little e. is cleaned up. convert_name was a bit of a
pain (in that it relied on modifying the expression rather than
returning a new one; or rather, that such behavior was relied on).
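The caller-side pattern, roughly (shapes are assumptions, not the
exact signatures):

    e = convert_name (e);       /* now returns a new expression     */
    type = get_type (e);        /* no longer converts names itself  */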
Sum expressions pull the negation through extend expressions, allowing
them to switch to subtraction when appropriate, and offset_cast reaches
past negation to check for extend expressions. This has eliminated all
negation-only instructions from the motor-point test, shaving off more
instructions (now 27, not including the return).
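The sum side of it looks something like this (helper names are
hypothetical stand-ins, sketch only):

    typedef struct expr_s expr_t;
    int is_extend (const expr_t *e);
    int is_neg (const expr_t *e);
    const expr_t *extend_operand (const expr_t *e);
    const expr_t *neg_operand (const expr_t *e);
    const expr_t *extend_expr (const expr_t *e);
    const expr_t *binary_expr (int op, const expr_t *a, const expr_t *b);

    static const expr_t *
    fold_sum (const expr_t *a, const expr_t *b)
    {
        /* a + extend(-x)  ->  a - extend(x) */
        if (is_extend (b) && is_neg (extend_operand (b))) {
            const expr_t *x = neg_operand (extend_operand (b));
            return binary_expr ('-', a, extend_expr (x));
        }
        return binary_expr ('+', a, b);
    }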
I don't know why I didn't think to do it this way before, but simply
recursing into each operand for + or - expressions makes it much easier
to generate correct code. Fixes the motor-point test.
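The shape of the recursion, very roughly (names are hypothetical):

    typedef struct expr_s expr_t;
    int is_sum (const expr_t *e);               /* '+' or '-' */
    const expr_t *sum_left (const expr_t *e);
    const expr_t *sum_right (const expr_t *e);
    void handle_leaf (const expr_t *e);
    void combine (const expr_t *e);

    static void
    walk (const expr_t *e)
    {
        if (is_sum (e)) {
            walk (sum_left (e));    /* recurse into each operand */
            walk (sum_right (e));
            combine (e);            /* then deal with the +/- itself */
        } else {
            handle_leaf (e);
        }
    }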
That is, passing int constants through ... in Ruamoko progs is no
longer a warning (it still is for v6p and v6 progs). I got tired of
getting the warning for sizeof expressions when int through ... hasn't
been a problem for even most v6p progs, and was intended not to be a
problem for Ruamoko progs.
But really only for memset and memmove, because they need to use an
int alias of the variable, and it may be only that alias that sets a
much larger variable.
This removes all the special cases and thus it should be more robust. It
did show up some out-by-one (or a factor of two!) errors elsewhere in
the group mask calculations.
I'm uncertain about the names, but this makes it much easier to get at
specific subtypes (e.g., PGA.tvec as a type) rather than having to know
the group mask.
This makes working with the plethora of types a little easier still. The
check for an algebra expression in field_expr needed to be moved up
because the struct code thought the algebra type was a normal vector.
And vice versa.
I'm not sure what I was thinking, but I've decided that not being able
to cast the pseudo-scalar from float to double (for printf etc.) was a
bug.