Missed this case in duplicate_type. Allows "short foo" and
"sizeof(short)" (even though qfcc and the engine have two ideas of the
size: I expect trouble later).
long is ignored for double, and v6p progs are stuck with 32 bits for
longs (don't feel like extending v6p any further), but the basics are
there for Ruamoko.
short is ignored for ints because the minimum size is 32, and signed is
just noise for ints anyway (and no chars, so...).
unsigned, however, is finally implemented properly (or at least seems to
be working correctly: tests pass after getting things compiling again,
and lt.u is used where it should be :)
Attempting to add ev_ushort caused ptraliasenc to break, but that was
because it was already broken: I had implemented the scan of the xdef
table incorrectly, so adding only one ev type resulted in the walked
pointer being out of phase with its data because it first passed over
the type encodings (which is why adding long and ulong didn't cause any
obvious trouble).
And other related fields so integer is now int (and uinteger is uint). I
really don't know why I went with integer in the first place, but this
will make using macros easier for dealing with types.
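The macro angle, as a hedged illustration (the typedefs below are
stand-ins rather than QuakeForge's actual definitions, and
TYPE_SIZE_CASE is hypothetical): consistent short names let one pasted
token name both the enum tag and the typedef, which "integer" vs "int"
could not.

#include <stddef.h>

typedef int      pr_int_t;     /* stand-in definitions for illustration */
typedef unsigned pr_uint_t;
typedef float    pr_float_t;

typedef enum { ev_int, ev_uint, ev_float } etype_t;

/* one token names both the enum tag and the typedef */
#define TYPE_SIZE_CASE(t)  case ev_##t: return sizeof (pr_##t##_t)

static size_t
type_size (etype_t type)
{
    switch (type) {
        TYPE_SIZE_CASE (int);
        TYPE_SIZE_CASE (uint);
        TYPE_SIZE_CASE (float);
    }
    return 0;
}
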
They are both gone, and pr_pointer_t is now pr_ptr_t (pointer may be a
little clearer than ptr, but ptr is consistent with things like intptr,
and keeps the type name short).
I don't know why they were ever signed (oversight at id and just
propagated?). Anyway, this resulted in "unsigned" spreading a bit, but
all to reasonable places.
This includes calls and unconditional jumps, relative and through a
table. The parameters are all lumped into the one object, with some
being unused by the different types (eg, args and ret_type used only by
call expressions). Just having nice names for the parameters (instead of
e1 and e2) makes it nice, even with all the sub-types lumped together.
No mysterious type aliasing bugs this time ;)
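A sketch of the shape being described, with hypothetical field names
(not qfcc's actual layout): one tagged expression covers jumps, table
jumps and calls, and each parameter gets a descriptive name instead of
e1/e2, with some fields simply unused for some kinds.

typedef enum {
    branch_jump,        /* unconditional jump, relative */
    branch_jump_table,  /* jump through a table */
    branch_call,        /* function call */
} branch_kind_t;

typedef struct ex_branch_s {
    branch_kind_t  type;      /* which kind of branch this is */
    struct expr_s *target;    /* destination label or called function */
    struct expr_s *index;     /* table index (jump tables only) */
    struct expr_s *args;      /* argument list (calls only) */
    struct type_s *ret_type;  /* return type (calls only) */
} ex_branch_t;
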
The move operator names are definitely obsolete (due to dropping the
expressions a year or two ago) and the precedence checks seem to be
handled elsewhere. Memset and state expressions went away a while back
too.
While this was a pain to get working, that pain only went to prove the
value of using proper "types" (even if only an enum) for different
expression types: just finding all the places to edit was a chore, and
it was easy to make mistakes (forgetting bits here and there).
Strangely enough, this exposed a pile of *type* aliasing bugs (next
commit).
v6 vs v6p are more or less as before, with ruamoko added in. qfcc will
now try (and fail, due to the opcode table opnames being wrong) to
create ruamoko progs when given the ruamoko target option.
At this stage, I doubt emit.c will need to know the details of the
target (v6, v6p, ruamoko) since the instruction formats are identical,
just different meanings for the opcode itself.
This allows v6, v6p (older QF VM) or ruamoko (new QF VM) to be targeted.
Currently defaults to v6p to allow QF to continue building without too
much hassle.
While qfcc dealing sensibly with mixed target VMs in the object files
has always been an outstanding issue, with the new instruction set it
has become a priority. Most importantly, this should allow QF to
continue building while I work on qfcc targeting the new IS.
It does little good for documentation to refer to fields that don't
exist (because a certain someone forgot to change the docs when changing
the field names, I wonder who :P).
And partial implementations in qfcc (most places will generate an
internal error (not implemented) or segfault, but some low-hanging fruit
has already been implemented).
This allows the VM to select the right execution loop and qfcc currently
still produces only the old IS (it doesn't know how to deal with the new
IS yet)
build_struct was unconditionally setting the type's alignment. This was
not a problem before because no types were requesting alignments larger
than those requested by their members (for structs). However, with the
upcoming new instruction set, quaternions need to be 4-word aligned.
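A minimal sketch of the fix's shape (hypothetical helper, not the
actual build_struct code): raise the aggregate's alignment to the
largest requirement seen rather than overwriting whatever the type
already asked for.

/* called once per member and once for any pre-set type alignment */
static void
raise_alignment (int *alignment, int required)
{
    if (required > *alignment)
        *alignment = required;   /* never lower an existing requirement */
}
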
For int, long, float and double. I've been meaning to add them for a
while, and they're part of the new Ruamoko instruction set (which is
progressing nicely).
The opcode table is a nightmare to maintain, but this does clean it up
and speed up opcode lookups since they can now be indexed. Of course, it
turns out I had missed adding several instructions, so had to fix that,
and qfcc needed a bit of a re-jigger to get the opcode out of the table.
The assignment to the node's variable must come after any uses of that
node, which the node's parent set indicates. In the swap test, this was
not a problem as the node had no parents, and in the link order test, it
just happened(?) to work.
While using just the label node's reachable set was sufficient for a
simple swap (t = a; a = b; b = t;), it is not sufficient for
read-before-write dependencies such as found in linked-list building:
{ o = array[ind]; o.next = obj; obj = o; }
The assignment to o.next uses obj, but that use is hidden because obj's
reachable nodes do not include o; thus assigning o to obj causes the
array dereference to be assigned directly to obj, and so o.next winds
up pointing to o instead of whatever obj was. The parent nodes of obj's
node are its users, so any new assignment to obj must come after those
parents as well as after any node reachable by obj's node.
Fixes a runaway loop error when adding a frikbot to the server.
qfo_to_progs was modifying the space data pointers in the input qfo,
making it impossible to reuse the qfo. However, qfo_relocate_refs needs
the updated pointers, thus do a shallow copy of the qfo and its spaces
(but not any of the data)
build_builtin_function does the right thing, and it was only legacy
syntax functions that were affected anyway. Certainly, external
variables should not be initialized, but klik uses @extern { } wrapped
around several builtin functions and I had added the feature to allow
just this as it is rather convenient.
I decided that the check for whether control reaches the end of the
function without performing some necessary action (eg, invoking
[super dealloc] in a derived -dealloc) is conceptually the return
statement using a pseudo operand and the necessary action defining that
pseudo operand, and is thus the same as checking for uninitialised
variables. So add a pseudo operand type and use one to represent the
invocation of [super dealloc], with a special function to call when the
"used" pseudo operand is "uninitialised".
While I currently don't know what else pseudo operands could be used
for, the system should be flexible enough to add any check.
Fixes #24
I want to use the function's pseudo address that was used for managing
aliased temporary variables for other pseudo operands as well. The new
name seems to better reflect the variable's purpose even without the
other pseudo operands as temporary variables are, effectively, pseudo
operands until they are properly allocated.
Forgetting to invoke [super dealloc] in a derived class's -dealloc
method has caused me to waste far too much time chasing down the
resulting memory leaks and crashes. This is actually the main focus of
issue #24, but I want to take care of multiple paths before I consider
the issue to be done.
However, as a bonus, four cases were found :)
While get_selector does the job of getting a selector from a selector
reference expression, I have long considered lumping various expression
types under ex_expr to be a mistake. Not only is this a step towards
sorting that out, it will make working on #24 easier.
I have gotten tired of chasing memory leaks caused by me forgetting to
add [super dealloc] to my dealloc methods, so getting qfcc to chew me
out when I do seems to be a good idea (having such a warning would have
saved me many hours, just as missing return warnings have).
Well... it could be done better, but this works for now assuming it's in
/usr/include (and it's correct for mxe builds). Does need proper
autoconfiscation, though.
The portal flow stack nodes contain a simd vector, which requires
16-byte alignment. However, on 32-bit Windows, malloc returns 8-byte
aligned memory, leading to eventual segfaults. Since pstack_t is 48
bytes on 32-bit systems, it fits nicely into a 64-byte aligned cache
line (or two on 64-bit systems due to being 80 bytes).
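An illustrative workaround pattern (not the engine's actual allocator):
C11 aligned_alloc, or _aligned_malloc on Windows, guarantees the
alignment that 32-bit Windows malloc does not; aligned_alloc also
requires the size to be a multiple of the alignment, hence the
round-up.

#include <stdlib.h>

#define CACHELINE 64

/* allocate cache-line aligned memory; the 48-byte (or 80-byte) stack
   nodes then never straddle more lines than necessary */
void *
cacheline_alloc (size_t size)
{
    size = (size + CACHELINE - 1) & ~(size_t) (CACHELINE - 1);
    return aligned_alloc (CACHELINE, size);   /* _aligned_malloc on Windows */
}
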
For most (if not all) maps. The heapsort is needed only if the clustered
leafs are not contiguous, but most bsp compilers output contiguous leaf
clusters, so it's just a bit of protection. The difference isn't really
noticeable on a fast machine, but no point in doing more work than
necessary.
Now that only 3852 clusters need to be checked for each cluster, fat-pvs
construction for ad_tears completes in about 0.7s, most of which seems
to be loading, conversion, compression and writing. O(N^3) cuts both
ways (hurts like crazy when N increases, does wonders when N decreases,
especially by a factor of 25). And then throw in improved cache
performance...
I suspect having an off-line compiler is still useful, but even if
qfvis's implementation never actually gets used, if cluster
reconstruction is put in the engine, large maps will be feasible even
for quakeworld. Just the reduced memory requirements alone will be a
huge benefit (~3GB down to 1.8MB).
This is only the first half (vertical) in that the vis bits are still
for the leafs rather than the clusters, but ad_tears goes from 500s to
7s for calculating the fat pvs (3852 clusters).
While this doesn't give as much of a boost as does basic sphere culling
(since it's just culling sphere tests), it took ad_tears' base vis from
1000s to 720s on my machine.
This removes the last of the arbitrary limits from qfvis. The goal is
not so much supporting crazy maps, but more about better data usage
(cluster_t is now 24 (or 16) bytes instead of 1048 (or 528)). And
passages isn't used (yet?)...
It turns out cmem is not so good for many large allocations (probably a
bug in handling the blocks), but was really meant for lots of little
churning allocations anyway. After an analysis of winding lifetimes, it
became clear that the hunk allocator would work very well. The base
windings are allocated from a global hunk (currently 1GB, plenty for
even ad_tears), and ephemeral windings are allocated from a per-thread
hunk of 1MB (seems to be way more than enough: gmsp3v2 uses a maximum of
only 56064 bytes, and ad_tears got through 30% before I gave up on it).
Any speed difference (for gmsp3v2) seems to be lost in the noise: still
completing in 38.4s on my machine.
The output fat-pvs data is the *difference* between the base pvs and fat
pvs. This currently makes for about 64kB savings for marcher.bsp, and
about 233MB savings for ad_tears.bsp (or about 50% (470.7MB->237.1MB)).
I expect using utf-8 encoding for the run lengths to make for even
bigger savings (the second output fat-pvs leaf of marcher.bsp is all 0s,
or 6 bytes in the file, which would reduce to 3 bytes using utf-8).
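A rough sketch of that idea (an assumed encoding, not current qfvis
output): writing each run length as a UTF-8 style sequence means values
under 128 cost one byte, values under 2048 cost two, and so on, instead
of a fixed-width count.

#include <stdint.h>

/* encode one run length (up to 21 bits) as a UTF-8 sequence,
   returning the number of bytes written */
static int
encode_run (uint32_t len, uint8_t *out)
{
    if (len < 0x80) {
        out[0] = len;
        return 1;
    }
    if (len < 0x800) {
        out[0] = 0xc0 | (len >> 6);
        out[1] = 0x80 | (len & 0x3f);
        return 2;
    }
    if (len < 0x10000) {
        out[0] = 0xe0 | (len >> 12);
        out[1] = 0x80 | ((len >> 6) & 0x3f);
        out[2] = 0x80 | (len & 0x3f);
        return 3;
    }
    out[0] = 0xf0 | (len >> 18);
    out[1] = 0x80 | ((len >> 12) & 0x3f);
    out[2] = 0x80 | ((len >> 6) & 0x3f);
    out[3] = 0x80 | (len & 0x3f);
    return 4;
}
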
After seeing set_size and thinking it redundant (thought it returned the
capacity of the set until I checked), I realized set_count would be a
much better name (set_count (node->successors) in qfcc does make much
more sense).
Extremely large maps take a very long time to process their PVS sets for
PHS or shadows, so having an off-line compiler seems like a good idea.
The data isn't written out yet, and the fat pvs code may not be optimal
for cache access, but it gets through ad_tears in about 500s (12
threads, compared to 2100s single-threaded in the qw server).
This reduces the overhead needed to manage the memory blocks as the
blocks are guaranteed to be page-aligned. Also, the superblock is now
allocated from within one of the memory blocks it manages. While this
does slightly reduce the available cachelines within the first block (by
one or two depending on 32 vs 64 bit pointers), it removes the need for
an extra memory allocation (probably via malloc) for the superblock.
When moving an identifier label from one node to another, the first node
must be evaluated before the second node, which the edge guarantees.
However, code for swapping two variables
t = a; a = b; b = t;
creates a dependency cycle. The solution is to create a new leaf node
for the source operand of the assignment. This fixes the swap.r test
without pessimizing postop code.
This takes care of the core problem in #3, but there is still room for
improvement in that the load/store can be combined into a move.
This reverts commit 2fcda44ab0.
Killing the node is not the correct answer as it blocks many
optimization opportunities. The correct answer is adding edges to
describe the temporal dependencies. Of course, this breaks the swap.r
test.
In order to correctly handle swap-style code
{ t = a; a = b; b = t; }
edges need to be created for each of the assignments moving an
identifier label, but the dag must remain acyclic (the above example
wants to create a cycle). Having the reachable nodes recorded makes
checking for potential loops a quick operation.
Identifiers can be constants. I don't remember quite what it fixed other
than some bogus kill relations in the dags (which might have caused
issues later).
If the src type is not a class, there is no inheritance chain to walk.
Fixes a segfault when returning self after a syntax error in the
following:
+(EditStatus *)withRect:(Rect)rect
{
return [[[self alloc] initWithRect:rect]:
}
-setCursorMode:(CursorMode)mode
{
cursorMode = mode;
return self;
}
GCC does a fairly nice job of producing code for vector types when the
hardware doesn't support SIMD, but it seems to break certain math
optimization rules due to excess precision (?). Still, it works well
enough for the core engine, but may not be well suited to the tools.
However, so far, only qfvis uses vector types (and it's not tested yet),
and tools should probably be used on suitable machines anyway (not
forced, of course).
This fixes the mightsee updates never occurring, but it doesn't make a
huge difference (though I suppose it might have back in the 90s, or with
a different map).
The stats were being updated before UpdateMightsee was getting called,
and it was incrementing the wrong value (so it would not have been
thread-safe).
While whether it's any faster is debatable (it's slightly slower, but
many more portals are being tested due to different rounding in the base
vis stage), it's certainly easier to read.
While the main bulk of the improvement (36s down from 42s for
gmsp3v2.bsp on my i7-6850K) comes from using a high-tide allocator for
the windings (which necessitated using a fixed size), it is ever so
slightly faster than using malloc as the back-end.
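One plausible shape for such a high-tide allocator (hypothetical names,
fixed-size elements as noted above): memory only ever rises to a
high-water mark, allocation is a bump of a counter, and nothing is
freed individually.

#include <stdlib.h>

#define POOL_COUNT 1024

typedef struct pool_s {
    struct pool_s *prev;      /* previously filled block, for final cleanup */
    size_t  elem_size;        /* fixed size of every winding */
    size_t  used;             /* high-tide mark within this block */
    char    data[];
} pool_t;

static pool_t *
pool_new (size_t elem_size, pool_t *prev)
{
    pool_t *pool = malloc (sizeof (pool_t) + POOL_COUNT * elem_size);
    pool->prev = prev;
    pool->elem_size = elem_size;
    pool->used = 0;
    return pool;
}

static void *
pool_alloc (pool_t **pool)
{
    if ((*pool)->used == POOL_COUNT)          /* block full: chain a new one */
        *pool = pool_new ((*pool)->elem_size, *pool);
    return (*pool)->data + (*pool)->used++ * (*pool)->elem_size;
}
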
This is for the conversion /to/ paletted textures, which is necessary
for csqc support. In the process, the conversion has been sped up by
implementing a color cache. I haven't measured the difference yet, but
Mr Fixit does seem to load much faster for the sw renderer than it did
before the change (going from many-months-old memory).
The server edict arrays are now stored outside of progs memory, only the
entity data itself (ie data accessible to progs via ent.fld) is stored in
progs memory. Many of the changes were due to code accessing edicts and
entity fields directly rather than through the provided macros.
Double benefit, actually: faster when building a fat PVS (don't need to
copy as much) and can be used in multiple threads. Also, default visibility
can be set, and the buffer size has its own macro.
Sort of at the request of leileilol (a utility to create quakepal.py was
asked for, but this seems to be a better approach). However, the feature is
not used yet (needs hooks in the import and export modules).
It now takes a context pointer (opaque data) that holds the buffers it
uses for the temporary strings. If the context pointer is null, a static
context is used (making those uses of va NOT thread-safe). Most calls to
va use the static context, but all such calls have been formatted
consistently so they are easy to find when it comes time to do a full
audit.
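The pattern, as a hedged sketch with hypothetical names (va_ctx_t and
va_fmt here are not the real API): the context owns the rotating
temporary buffers, so threads with their own contexts cannot race,
while a null context falls back to the shared static one.

#include <stdarg.h>
#include <stdio.h>

typedef struct va_ctx_s {
    char buffers[4][1024];    /* rotating temporary strings */
    int  next;
} va_ctx_t;

static va_ctx_t static_ctx;   /* fallback context: NOT thread-safe */

static const char *
va_fmt (va_ctx_t *ctx, const char *fmt, ...)
{
    va_list args;
    char   *buf;

    if (!ctx)
        ctx = &static_ctx;
    buf = ctx->buffers[ctx->next];
    ctx->next = (ctx->next + 1) % 4;

    va_start (args, fmt);
    vsnprintf (buf, sizeof (ctx->buffers[0]), fmt, args);
    va_end (args);
    return buf;
}
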
Block expressions hide ex_error, but get_type() always returns null when
it finds one (which it does by recursing into block expression), so just
check the type itself.
When a global variable is accessed via only an alias in a function, the
actual def's flowvar would remain in the state it was in from the last
function that accessed the global normally. This would result in invalid
flowvar accesses which can be difficult to reproduce (thus no test
case).
There's still some cleanup to do, but everything seems to be working
nicely: `make -j` works, `make distcheck` passes. There is probably
plenty of bitrot in the package directories (RPM, debian), though.
The vc project files have been removed since those versions are way out
of date and quakeforge is pretty much dependent on gcc now anyway.
Most of the old Makefile.am files are now Makemodule.am. This should
allow for new Makefile.am files that allow local building (to be added
on an as-needed basis). The current remaining Makefile.am files are for
standalone sub-projects.
The installable bins are currently built in the top-level build
directory. This may change if the clutter gets to be too much.
While this does make a noticeable difference in build times, the main
reason for the switch was to take care of the growing dependency issues:
now it's possible to build tools for code generation (eg, using qfcc and
ruamoko programs for code-gen).
This should keep things nicely extensible, since additional data can be
stored in the data space and found using defs. This gets the compilation
units into the sym file.
They worked well if there was only one source file in the test, but
failed if there were two or more. While only typelinker needed the
enhanced macros, I got them all because I generally copy the nearest
block when adding a new test, so it's best if they're all "correct".
The compilation unit stores the directory from which qfcc was run and
any source files mentioned. This is similar to dwarf's compilation unit.
Right now, this is the only data in the new debug space, but more might
come in the future so it seems best to treat the debug space separately
in the object files.
getcwd is assumed to use malloc if its buff param is null. This may need
fixing in the future, but it's in one spot. The result is "saved" in the
non-progs pool.
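The assumption spelled out (glibc and several other libcs behave this
way; POSIX leaves a null buffer unspecified, hence "may need fixing"):

#include <stdlib.h>
#include <stdio.h>
#include <unistd.h>

int
main (void)
{
    /* glibc extension: a null buffer means getcwd mallocs the result */
    char *cwd = getcwd (NULL, 0);
    if (cwd) {
        printf ("%s\n", cwd);   /* ...or save a copy in the non-progs pool */
        free (cwd);
    }
    return 0;
}
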
It never really helped sort out the path issues when using build
directories. It worked well enough for single directory projects, but
things got messy very quickly, especially when mixing ruamoko libs with
external progs. A better method based on dwarf is coming.
Killed nodes can leave stray (dangling) edges that cause some confusion
in the dot graphs and may cause problems later on down the track, so
ensure there are no dangling edges.
The reason double-alias fails is that, when the double assignment occurs, the
int operands don't yet have leaf nodes and thus the nodes cannot be
killed. This doesn't fix the bug.
I'm not sure if it's due more to doubles or unions, but the bug was
found via double. It seems the dags code generator doesn't see that the
assignment to the union's double field kills the two int fields.
The test passes when NOT optimizing.
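The shape of the failing case, written in C purely for illustration
(the actual test is Ruamoko): after the store to u.d, the old values of
the int fields are dead, and a code generator that fails to kill them
can fold the final reads to the stale 1 and 2.

typedef union {
    double d;
    int    i[2];
} dbl_bits_t;

int
union_alias (void)
{
    dbl_bits_t u;
    u.i[0] = 1;
    u.i[1] = 2;
    u.d = 3.0;                /* must kill the prior values of u.i[0..1] */
    return u.i[0] + u.i[1];   /* must reload the bit pattern of 3.0 */
}
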
Because type aliases need to be unaliased, the type pointers in the type
encodings need to be correct when it comes to linking defs and
functions. This fixes the linking errors in ruamoko/game.
I was very uncertain about the validity of messing with the old type
encoding that way, but adding the check to ensure the type has been
processed never fired, so it seems ok. And the comments help me a lot :)
When aliasing a type that already has aliases, the top node needs to be
replaced if it is unnamed, or the alias-free branch of the new node
needs to reach around to the alias-free branch of the existing node.
This fixes the bogus param counts in qwaq.
This fixes the typelinker test, but not the linking error in
ruamoko/game that it was supposed to represent. I guess there's
something more going on (maybe type encoding relocation issues).
Fixes #6
It turned out that the problem with @zero was caused by initial type
chaining occurring before the structures had been initialized and thus
the linker's @zero type encoding string was incorrect: {?=} instead of
{tag @zero-}, thus when the actual type encoding supplied by an object
file came along (with the correct encoding string), it wasn't found.
It is now "consistent" with the rest of the type building in that it
uses find_type(append_type(return, params)) like the C version, thus
allowing append_type to do its thing with type aliases. This fixes the
overload test.
This is a bit of a weird one because it's a combination of the aliasing
code and mixing C prototypes with QuakeC function definitions, and the
function type rebuilding in qc-parse.y not being very "consistent" in
its abuse of the type system.
The full_type branch of an alias splitter (alias with null name) needs
to mirror the clean side up to the type alias. It is causing problems
with functions, but that's expected because parameters complicate
things.
It's not connected up yet because I'm unsure of just where to put things
(it gets messy fast), but just being able to see the structure of
complex types is nice.
This eases type unaliasing on functions a little.
Still more to go, but this fixes a really hair-pulling bug: linux's
heap randomiser was making the typedef test fail randomly whenever
typedef.qfo was compiled.
When a type is aliased, the alias has two type chains: the simple type
chain with all other aliases stripped, and the full type chain. There
are still plenty of bugs in it, but having the clean type chain takes
care of the major issue that was in the previous attempt as only the
head of the type-chain needs to be skipped for type comparison.
Most of the bugs are in finding the locations where the head needs to be
skipped.
All simple type checks are now done using is_* helper functions. This
will help hide the implementation details of the type system from the
rest of the compiler (especially the changes needed for type aliasing).
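A sketch of the pattern with hypothetical names (not qfcc's actual
type_t): callers ask a predicate, and details like skipping alias nodes
stay behind the helper.

typedef enum { BASIC_VOID, BASIC_FLOAT, BASIC_INT, BASIC_ALIAS } basic_t;

typedef struct my_type_s {
    basic_t basic;
    const struct my_type_s *aliased;   /* valid when basic == BASIC_ALIAS */
} my_type_t;

static const my_type_t *
unalias (const my_type_t *t)
{
    while (t->basic == BASIC_ALIAS)
        t = t->aliased;
    return t;
}

static int
is_float_type (const my_type_t *t)
{
    /* the rest of the compiler never looks at t->basic directly */
    return unalias (t)->basic == BASIC_FLOAT;
}
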
They take a pointer to a free-list used for hashlinks so the hashlink
pools can be per-thread. However, hash tables that are not updated are
always thread-safe, so this affects only updates. progs_t has been set
up such that multiple progs within one thread can easily share
hashlinks.
and its usage. The parts of flow_analyze_statement that use it know
where the returned operand needs to go. Unfortunately, this breaks dags
pretty hard, but that's because dags needs to learn about the fancy
assignment-type statements.
I noticed that pointer math is currently incorrect in qfcc, but it would
be nice for fixing it to not break anonstruct since it is testing
something else.
This fixes the technically correct but horrible mess of temps and
addressing when dealing with ivars, and the resulting uninitialized
temps due to the non-constant pointers (do need statement level constant
folding, though).
This is part of what messed up float_val in the encoding for @params.
The other part is something in the linker type encoding merge code: it
may be too aggressive. It's also what messed up the size of @params.
That is, those created by operand_address. The dag code needs the
expression that is attached to the statement to have the correct
expression type in order to do the right thing with the operands and
aliasing, especially when generating temps. This fixes assignchain when
optimizing (all tests pass again).
This reverts commit c78d15b331.
While a block expression's result may be an l-value, block expressions
are not (and their results may not be), thus taking the address of one
is not really correct. It seems the only place that tries to do so is
the assignment code when dealing with structures.
This reverts commit b49d90e769.
I suspect this was a workaround for the mess in assignment chains.
However, it caused compile errors with the new implementation, and is
just bogus anyway.
While I still hate ".=", at least it's more hidden, and the new
implementation is a fair bit cleaner (hah, goto a label in an if (0) {}
block).
Most importantly, the expression tree code knows nothing about it. Now
just to figure out what broke func-expr. A bit of whack-a-mole, but yay
for automated tests.
Doing it in the expression trees was a big mistake for several
reasons. For one, expression trees are meant to be target-agnostic, so
they're the wrong place for selecting instruction types. Also, the move
and memset expressions broke "a = b = c;" type expression chains.
This fixes most things (including the assignchain test) with -Werror
turned off (some issues in flow analysis uncovered by the nil
migration: memset target not extracted).
Now convert_nil only assigns the nil expression a type, and nil makes
its way down to the statement emission code (where it belongs, really).
Breaks even more things :)
This bug drove me nuts for several hours until I figured out what was
going on.
The assignment sub-tree is being generated, then lost. It works for
simple assignments because a = b = c -> (= a (= b c)), but for complex
assignments (those that require move or memset), a = b = c -> (b = c) (a
= c) but nothing points to (b = c). The cause is using binary
expressions to store assignments.
It's not possible to take the address of constants (at this stage) and
trying to use a move instruction with .zero as source would result in
the VM complaining about null pointer access when bounds checking is on.
Thus, don't convert a nil source expression until it is known to be
safe, and use memset when it is not.
This fixes the problem of using the return value of a function as an
element in a compound initializer. The cause of the problem is that
compound initializers were represented by block expressions, but
function calls are contained within block expressions, so def
initialization saw the block expression and thought it was a nested
compound initializer.
Technically, it was a bug in the nested element parsing code in that it
wasn't checking the result value of the block expression, but using a
whole new expression type makes things much cleaner and the work done
paves the way for labeled initializers and compound assignments.
Not that it really makes any difference for labels since they're
guaranteed unique, but it does remove the question of "why nva instead
of save_string?". Looking at history, save_string came after I changed
it from strdup (va()) to nva(), and then either didn't think to look for
nva or thought it wasn't worth changing.
Multi-line calls (especially messages) got rather confusing to read as
the lines jumped back and forth. Now the binding is better but the dags
code is reordering the parameters sometimes.
The server code is not yet ready for doubles, especially in its varargs
builtins: they expect only floats. When float promotion is enabled
(default for advanced code, disabled for traditional or v6only),
"@float_promoted@" is written to the prog's strings.
That was a fair bit trickier than I thought, but now .return and .paramN
are handled correctly, too, especially taking call instructions into
account (they can "kill" all 9 defs).
This reverts commit a2f203c840.
There is indeed a world of difference between "any" and "only", and it
helps if I read the rest of the docs AND the code :P.
As expected, this does not fix the mangled pointer problem in
struct-init-param.r, but it does improve the ud-chains. There's still a
problem with .return, but its handling in flow_analyze_statement is a
bit "special" :P.
Doing the same thing at the end of two branches of an if/else seems off.
And doing an associative(?) set operation every time through a loop is
wasteful.