The ud- and du-chains include known side-effects of the instructions and
thus depict a more accurate view of what operands an instruction uses or
defines. Fixes the arraylife2 test.
As with defs, a partial write should not define the whole temp, so copy
the "don't visit main" behavior recently added to def_visit_all. This fixes
missing ud-chains for component-by-component assignments to temporary
vectors.
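For illustration, this is the shape of code that was losing its ud-chains
(a C-style sketch, not the actual test case): each component store is a
partial write and must not count as a def of the whole temporary.

typedef struct { float x, y, z; } vec3;

void
component_writes (void)
{
    vec3 temp;
    temp.x = 1;     // partial write: defines only temp.x
    temp.y = 2;     // partial write: defines only temp.y
    temp.z = 3;     // partial write: defines only temp.z
}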
I'm not certain this is correct, but it seems to me that du-chains are
the same information as ud-chains, but from the defining statement's
point of view instead of that of the using statement.
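If that reading is right, the relationship looks like this (made-up
statement labels, C-style sketch):

extern int f (void);

void
chains_example (void)
{
    int x = f ();       // S1: defines x
    int a = x + 1;      // S2: uses x; ud-chain of (S2, x) = {S1}
    int b = x * 2;      // S3: uses x; ud-chain of (S3, x) = {S1}
    // du-chain of (S1, x) = {S2, S3}: the same def-use edges, just
    // recorded from the defining statement's side
}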
As certain statements (in particular, function calls) can use additional
variables via pointer parameters, it's necessary to iterate ud-chain
building until the count stabilizes. This should make live variable
analysis much easier.
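Roughly the shape of the loop; build_ud_chains and func_t here are
illustrative stand-ins, not qfcc's actual names:

typedef struct func_s func_t;
int build_ud_chains (func_t *func);     // returns the total ud-chain count

static void
build_until_stable (func_t *func)
{
    int count = build_ud_chains (func);
    int prev;
    do {
        prev = count;
        // calls that use extra variables via pointer parameters can
        // expose new uses, so rebuild until the count stops changing
        count = build_ud_chains (func);
    } while (count != prev);
}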
Lexing . as a single character makes it impossible to enter fractions.
Unfortunately, this means that . as dot product requires white space on
either side.
Currently only PGA(3) is supported, but that's only because the parser is
rather simple (recursive descent with a lame lexer); it works well
enough for playing with geometric algebra without having to recompile
every time.
Wrap the strtod, strtof, strtol, strtoul functions, supporting the end
pointer as well (if not nil, the int offset of the end pointer relative
to the string start is returned).
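The intent, as a C sketch of the described behaviour (the real wrappers
live on the Ruamoko side, so the exact signature may differ):

#include <stdlib.h>

// return the converted value; if end_offset is not null, store the
// offset of the end pointer relative to the start of the string
static double
wrap_strtod (const char *str, int *end_offset)
{
    char   *end;
    double  value = strtod (str, &end);

    if (end_offset)
        *end_offset = (int) (end - str);
    return value;
}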
Also, str_unmutable creates a return string from a mutable string
(copying it).
That means some leaks have been plugged, and some useful functions added:
loading a file (avoids polluting progs memory), setting the single-character
lexeme string, and getting the line number.
Segfaulting when trying to produce an error message doesn't help get the
message out. Sure, `obj_error (nil...)` is a bit of an abuse, but it
shouldn't segfault the engine.
I don't know what I was thinking when I checked for 0 count for resizing
the set. Attempting to add/remove 0 elements results in adding/removing
4G elements. Oops.
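In miniature (illustrative only; the actual set code is more involved),
the unsigned arithmetic is what turns 0 into 4G:

unsigned count = 0;
unsigned bogus = count - 1;     // wraps to 0xffffffff: ~4G "elements"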
I think the current build_element_chain implementation does a reasonable
job, but I'm in the process of getting designated initializers working,
so it will become important to ensure that uninitialized members get
initialized.
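For example (plain C shown here; the Ruamoko form should end up behaving
the same way), anything not named by a designator still has to come out
zeroed:

struct point { int x, y, z; };

// only z is given a value; x and y must still be initialized to 0,
// which is why build_element_chain needs to fill in the gaps
struct point p = { .z = 3 };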
set_while checks the membership of the iterator's current element and skips
to the first element with different membership. That is, if the current
element is in the set, then set_while returns the next element *not* in the
set, but if the current element is not in the set, then set_while returns
the next element that *is* in the set. Rather handy for dealing with
clusters of set elements.
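A sketch of that behaviour (hypothetical code, not the actual API; it
assumes a membership test along the lines of set_is_member and ignores
running off the end of the set):

#include "QF/set.h"

// return the first element at or after 'element' whose membership
// differs from that of 'element' itself
unsigned
set_while_sketch (const set_t *set, unsigned element)
{
    int in = set_is_member (set, element);

    while (set_is_member (set, element) == in)
        element++;
    return element;
}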
I never liked the various hacks I had come up with for representing
resource handles in Ruamoko. Structs with an int were awkward to test,
pointers and ints could be modified, etc etc. The new @handle keyword (@
used to keep handle free for use) works just like struct, union and
enum in syntax, but creates an opaque type suitable for a 32-bit handle.
The backing type is a function, so v6 progs can use it (all the
necessary opcodes exist), and no modifications were needed for
type-checking in binary expressions; only assignment and comparisons
are supported, and (of course) nil. Tested using cbuf_t and QFile: seems
to work as desired.
I had considered 64-bit handles, but really, if more than 4G resource
objects are needed, I'm not sure QF can handle the game. However, that
limit is per resource manager, not total.
This gives the resultant point the correct sign. Though the projective
divide would take care of the sign, this makes reading the point a
little less confusing (still need to sort out automatic blade reversals
for the likes of e31).
And other graphics versions too, of course. A lot of it feels rather
like a hack, but I think the entire canvas/console setup and console-related
event handling may need a rethink (it's not horribly wrong, but
it doesn't feel right), particularly the initial sizing and the binding for
toggling the console.
As the dot product is a metric product, using the metric is vital to
getting the correct results. This fixes the calculation of the closest
point on a line to a point other than the origin (and a whole pile of
other issues, I imagine).
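For basis vectors the point is easy to see: PGA has a degenerate metric
(e0 squares to 0 while e1, e2, e3 square to 1), so dropping the metric
quietly changes any product that touches e0. A minimal sketch, assuming a
per-basis-vector signature array (illustrative, not the library's code):

// dot product of two basis vectors for an orthogonal basis;
// e.g. signature = {0, 1, 1, 1} for PGA(3), where e0 squares to zero
float
basis_dot (const float *signature, int i, int j)
{
    return (i == j) ? signature[i] : 0;     // cross terms vanish
}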
Now that arrays work well enough for this case, no point in having the
workarounds (other than they're actually faster, but I'd like to
optimize *that* sometime).
This takes advantage of the ud-chains to follow the trail of pointer
assignments when looking for an address. That gets array element assignments
to survive across blocks when the array itself is passed to a function.
It doesn't help when the address of the element is taken, though; I think
that's a dags problem and probably needs du-chains. Also, ud-chain
creation should probably be done in two passes so the newly found
information can be recorded.
Def and kill are still handled in flow_analyze_statement, but this makes
call metadata more consistent between v6 and ruamoko progs, allowing
the statement use chain to be used for call argument analysis. It even
found a bug in the extraction of param counts from the call instruction.
I had missed the flowvar clearing for auxiliary use/def/kill operands.
It's possible it wasn't necessary at the time since the operands were
added just for dealloc checking, but there's every reason it could
become necessary.
The first use will be pointer analysis for function arguments where the
argument points to an array to mark the array as live, but I'm sure
there'll be plenty of other uses.
A partial write to a def should not define the whole def, thus
def_visit_all's overlap parameter now has a flag that prevents a visit
to the main def when accessing the def from an alias def. This prevents
a lot of spurious kills and defines in flow analysis.
The array access code was loading the vector, modifying the element,
then forgetting to write the modified vector back to where it came from.
Writing the whole vector back would be rather sub-optimal anyway, so now
when the vector is accessed via a pointer, the array code switches to field
access to get at the vector element, avoiding the need to copy the whole
vector.
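In miniature (C-style sketch of the two patterns, not the generated code):

typedef struct { float x, y, z, w; } vec4;

// the buggy pattern: load the whole vector, modify a copy, and
// never write it back
void
set_y_broken (vec4 *v, float value)
{
    vec4 tmp = *v;
    tmp.y = value;          // *v never sees this
}

// what the array code does now: field access through the pointer,
// so there's no whole-vector copy to forget about
void
set_y_fixed (vec4 *v, float value)
{
    v->y = value;
}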
Needed for proper analysis (ud-chains, etc.). Of course, it was then
necessary to remove the parameter defs from the uninitialized defs.
Also, plug a couple of memory leaks (forgot to free some temporary
sets).
That is, `array + offset`. This actually works around the bug
highlighted by arraylife.r (because the array is explicitly used), but
is not a proper solution, so that test still fails of course. However,
with this, it's no longer necessary to use `&array[index]` instead of
`array + index`.
I could never remember what any of the numbers meant. While the define
numbers are still a little fuzzy (they're (pseudo)statement numbers), at
least now I'll always know that the numbers are the define set. Also, having
the flow address of the variable helps with understanding the reaching defs
output.
It seems that the optimizer keeps array assignments live when passing
the array as a pointer, but not when passing the address of an element.
Found when testing the following code:
BasisBlade *pga_blades[16] = {
    blades[1], blades[2], blades[3], blades[4],
    blades[7], blades[6], blades[5], blades[0],
    blades[8], blades[9], blades[10], blades[15],
    blades[14], blades[13], blades[12], blades[11],
};
BasisGroup *pga_groups[4] = {
    [BasisGroup new:4 basis:&pga_blades[ 0]],
    [BasisGroup new:4 basis:&pga_blades[ 4]],
    [BasisGroup new:4 basis:&pga_blades[ 8]],
    [BasisGroup new:4 basis:&pga_blades[12]],
};
Only the first element of pga_blades is being assigned in the optimized
code, but everything is correct when not optimizing.
Removed a bogus dependency from libQFecs, and fixed the order of ui
libraries. This takes care of some first-time make install issues.
Libtool needs the libraries to be specified in dependency order.