Also remove the extern for current_storage as it belongs in shared.h.
I'm not satisfied with the documentation for initialize_def, but it will do
for now. I probably have to rewrite the thing as it's a bit of a beast.
With the introduction of the statement type enum came a prefix clash. As
"st" makes sense for "statement type", I decided that "storage class"
should be "sc". Although there haven't been any problems as of yet, I
decided it would be a good idea to clean up the clash now. It also helps
avoid confusion (I was a bit surprised after working with st_assign etc to
be reminded of st_extern etc).
It doesn't quite work yet, but...
It has proven necessary to know what type .return has at any point in the
function. The segfault in ctf is caused by the return statement added to
the end of the void function messing with the expr pointer stored in the
daglabel for .return. While this is actually by design (though the
statement really should have a valid expr pointer), it actually
highlights a bigger problem: there's no stable knowledge of the current
type of .return. This is not a problem in expression statements as the
dagnodes for expression statements store the desired types of all operands.
However, when assigning from .return to attached variables in a leaf node,
the type of .return is not stored anywhere but the expression last
accessing .return.
Now information such as dags or live variables is dumped separately, and the
live variable information replaces the flow node in the diagram (as dags
have done recently).
They really should have been in statements.[ch] in the first place
(actually, they sort of were: is_goto etc, so some redundant code has been
removed, too).
The evil comment is not just "pragmas are bad, ok?", but switching between
advanced, extended and traditional modes when compiling truly is evil and
not guaranteed to work. However, I needed it to make building test cases
easier (it's mostly ok to go from advanced to extended or traditional, but
going the other way will probably cause all sorts of fun).
In the process, opcode_init now copies the opcode table data rather than
modifying it.
After running across a question about lists of animation frames and states,
I decided giving qfcc the ability to generate such lists might be a nice
distraction from the optimizer :) Works for both progs.src and separate
compilation. No frame file is generated if no macros have been created.
It is necessary to know if a def is a function parameter so it can be
treated as initialized by the flow analyzer. The support for the flag in
object files is, at this stage, purely for debugging purposes.
.return and .param_N are not classed as global variables for data flow
analysis. .return is taken care of by return statements, and .param_N by
call statements.
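The flag itself just lets the analyzer seed its notion of "already defined".
A minimal sketch of the idea (assumed names and layout, not the actual qfcc
code):

    typedef struct set_s set_t;             /* qfcc-style set, forward declared */
    void set_add (set_t *set, unsigned x);  /* assumed prototype */

    typedef struct def_s {
        unsigned    param:1;        /* set when the def is a function parameter */
        /* ... */
    } def_t;

    /* Treat parameters as initialized: add them to the entry node's
       "defined" set so uninitialized-use checks don't fire on them. */
    static void
    seed_params (set_t *defined, def_t **defs, int num_defs)
    {
        for (int i = 0; i < num_defs; i++) {
            if (defs[i]->param)
                set_add (defined, i);
        }
    }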
With this, the menus work up to attempting to load the menu plist.
Something is corrupting zmalloc's blocks.
With temp types changing and temps being reused within the one instruction,
the def type is no longer usable for selecting the opcode. However, the
operand types are stable and more correct.
Nicely, the need for dag_gencode to recurse seems to have been removed.
At least for a simple case, correct code is generated :)
switch.r:49: case 1: *to = *from++;
003b loadbi.i *(from + 0), .tmp10
003c add.i from, .imm, from
003d storep.i .tmp10, *to
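Selecting by operand types boils down to a table lookup keyed on those types
instead of on the defs. A toy illustration (not the actual qfcc opcode table;
names and numbers are made up):

    #include <stddef.h>
    #include <string.h>

    typedef enum { ev_void, ev_float, ev_int, ev_pointer } etype_t;

    typedef struct {
        const char *name;
        etype_t     type_a, type_b, type_c;
        int         opcode;
    } opcode_t;

    static opcode_t opcode_table[] = {
        { "add",    ev_int,     ev_int,     ev_int,  1 },
        { "loadbi", ev_pointer, ev_int,     ev_int,  2 },
        { "storep", ev_int,     ev_pointer, ev_void, 3 },
    };

    /* Pick the instruction from the statement's operand types rather than
       from the (no longer reliable) def types. */
    static opcode_t *
    find_opcode (const char *name, etype_t a, etype_t b, etype_t c)
    {
        for (size_t i = 0; i < sizeof opcode_table / sizeof opcode_table[0]; i++) {
            opcode_t   *op = &opcode_table[i];
            if (!strcmp (op->name, name)
                && op->type_a == a && op->type_b == b && op->type_c == c)
                return op;
        }
        return NULL;
    }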
It doesn't make any difference yet, but that's because I need to add extra
edges indicating inter-node dependencies. However, the sort does seem to
work for its limited input.
While things are quite broken now (very incorrect code is being generated),
the dag is much easier to work with. The dag is now stored in an array of
nodes (the children pointers are still used for dagnode operands), and sets
are used for marking node parents, attached identifiers and (when done)
extra edges.
Instead of storing the generating statement in the dagnode, the generating
expression is stored in the daglabel. The daglabel's expression pointer is
updated each time the label is attached to a node. Now I know why debugging
optimized code can be... interesting.
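In outline, the new layout looks something like this (a rough sketch; field
names beyond those mentioned above are assumptions):

    typedef struct set_s     set_t;     /* forward declarations only */
    typedef struct expr_s    expr_t;
    typedef struct operand_s operand_t;

    typedef struct daglabel_s {
        operand_t  *op;         /* the variable or temp this label names */
        expr_t     *expr;       /* expression that last attached this label */
    } daglabel_t;

    typedef struct dagnode_s {
        int         number;     /* index into the dag's node array */
        daglabel_t *label;      /* operator or leaf label */
        struct dagnode_s *children[3]; /* operand nodes, still plain pointers */
        set_t      *parents;    /* nodes that use this node as an operand */
        set_t      *identifiers;/* labels currently attached to this node */
        set_t      *edges;      /* extra ordering edges (when done) */
    } dagnode_t;

    typedef struct dag_s {
        dagnode_t **nodes;      /* the array of nodes */
        int         num_nodes;
    } dag_t;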
It now seems to generate correct code for each node. However, node order is
still incorrect in places (foo++ is being generated as ++foo). quattest.r
actually executes and produces the right output :)
flow_analyze_statement uses the statement type to quickly determine which
operands are inputs and which are outputs. It takes (optional) sets for
used variables, defined variables and killed variables (only partially
working, but I don't actually use kill sets yet). It also takes an optional
array for storing the operands: index 0 is the output, 1-3 are the inputs.
flow_analyze_statement clears any given sets on entry.
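For the record, the interface comes out looking roughly like this (a sketch
based on the description above; the exact signature may differ):

    typedef struct set_s       set_t;
    typedef struct operand_s   operand_t;
    typedef struct statement_s statement_t;

    /* Any of the sets may be null if the caller doesn't want them; given sets
       are cleared on entry. operands[0] receives the output operand,
       operands[1] through operands[3] the inputs (null where not used). */
    void flow_analyze_statement (statement_t *s, set_t *use, set_t *def,
                                 set_t *kill, operand_t *operands[4]);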
Live variable analysis now uses the sets rather than individual vars. Much
cleaner code :).
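The iteration itself is the standard backward dataflow one: live_out of a
node is the union of live_in over its successors, and live_in is
use | (live_out - def), repeated until nothing changes. A tiny standalone
illustration (not qfcc code; variables as bits in a mask):

    #include <stdio.h>

    enum { A = 1, B = 2, C = 4 };           /* three variables as bit flags */

    int
    main (void)
    {
        unsigned    use[3]  = { 0, A, B | C };  /* vars read by each node */
        unsigned    def[3]  = { A, B, 0 };      /* vars written by each node */
        int         succ[3] = { 1, 2, -1 };     /* straight-line flow 0->1->2 */
        unsigned    live_in[3] = { 0 }, live_out[3] = { 0 };

        for (int changed = 1; changed; ) {
            changed = 0;
            for (int i = 2; i >= 0; i--) {
                unsigned    out = succ[i] >= 0 ? live_in[succ[i]] : 0;
                unsigned    in  = use[i] | (out & ~def[i]);
                if (out != live_out[i] || in != live_in[i]) {
                    live_out[i] = out;
                    live_in[i]  = in;
                    changed = 1;
                }
            }
        }
        for (int i = 0; i < 3; i++)
            printf ("node %d: live_in %x live_out %x\n", i, live_in[i], live_out[i]);
        return 0;
    }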
Dags are completely broken.
The types are expression, assignment, pointer assignment (ie, write to a
dereferenced pointer), move (special case of pointer assignment), state,
function call/return, and flow control. With this classification, it will
be easier (less code:) to determine which operands are inputs and which are
outputs.
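So the enum ends up something like this (a sketch; apart from the st_ prefix
and st_assign mentioned earlier, the exact spellings are guesses):

    typedef enum {
        st_none,        /* not a (valid) statement */
        st_expr,        /* c = a op b */
        st_assign,      /* b = a */
        st_ptrassign,   /* *b = a (write through a dereferenced pointer) */
        st_move,        /* block move; special case of pointer assignment */
        st_state,       /* state [frame, think] */
        st_func,        /* function call or return */
        st_flow         /* goto, conditional branches, etc. */
    } st_type_t;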
Surprisingly, I don't yet have to "throw one out", but things are still
problematic: rcall1 is getting two arguments, goto and return get lost,
rcall2 got an old temp rather than the value it was supposed to, but
progress :)
First, it turns out using daglabels wasn't such a workable plan (due to
labels being flushed every sblock). Instead, flowvars are used. Each actual
variable (whether normal or temp) has a pointer to the flowvar attached to
that variable.
For each variable, the statements that use or define the variable are
recorded in the appropriate set attached to each (flow)variable.
The flow graph nodes are now properly separated from the graph, and edge
information is stored in the graph struct. This actually made for much
cleaner code (partly thanks to the use of sets and set iterators).
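Structurally, that works out to something like the following (rough sketch,
assumed field names):

    typedef struct set_s set_t;

    typedef struct flowvar_s {
        set_t      *use;            /* statements that use this variable */
        set_t      *define;         /* statements that define this variable */
        struct operand_s *op;       /* the actual variable (normal or temp) */
        int         number;         /* index used in the analysis sets */
    } flowvar_t;

    typedef struct flowedge_s {
        unsigned    tail, head;     /* node numbers within the graph */
    } flowedge_t;

    typedef struct flowgraph_s {
        struct flownode_s **nodes;
        int         num_nodes;
        flowedge_t *edges;          /* edge info lives in the graph, not the nodes */
        int         num_edges;
    } flowgraph_t;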
Flow graph reduction has been (temporarily) ripped out as the entire
approach was wrong. There was also a bug: I didn't really understand what
the dragon book says about selecting nodes and thus messed things up. The
depth-first search tree "fixed" the problem, but was really the wrong
solution (sledge hammer :P).
Also, now that I understand that dot's directed graphs must be acyclic, I
have much better control over the graphs (back edges need to be
flipped).
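One common way to do the flipping when emitting the dot file is to reverse
the edge for layout purposes and set dir=back so the arrowhead still points
the right way. A sketch (assuming nodes are numbered in graph order; not
necessarily exactly what qfcc does):

    #include <stdio.h>

    /* Emit one flow graph edge in dot syntax. For a back edge (head does not
       come after tail), reverse it so dot's ranking sees an acyclic graph,
       and use dir=back to keep the arrow pointing at the real head. */
    static void
    print_flow_edge (FILE *out, int tail, int head)
    {
        if (head <= tail)
            fprintf (out, "  n%d -> n%d [dir=back];\n", head, tail);
        else
            fprintf (out, "  n%d -> n%d;\n", tail, head);
    }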
The reduction is performed iteratively until the graph is irreducible, but
such that each reduction wraps the previous graph. Unfortunately, due to
depth-first searching not being implemented, graphs that should be reduced
(ie, those with natural loops) are not.
set_first() now returns a pointer to a setstate_t struct that holds the
state necessary for scanning a set. set_next() will automatically delete
the state block when the end of the set is reached. set_delstate() is also
provided to allow early termination of the scan.
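Typical usage then looks like this (a sketch; the layout of setstate_t and
the name of its current-element field are assumptions):

    typedef struct set_s set_t;

    typedef struct setstate_s {
        unsigned    member;         /* current element (assumed field name) */
        /* internal scan state ... */
    } setstate_t;

    setstate_t *set_first (set_t *set);
    setstate_t *set_next (setstate_t *ss);
    void set_delstate (setstate_t *ss);

    void
    scan_example (set_t *vars, unsigned stop_at)
    {
        for (setstate_t *ss = set_first (vars); ss; ss = set_next (ss)) {
            if (ss->member == stop_at) {
                set_delstate (ss);  /* bailing out early: free the state */
                break;
            }
            /* ... use ss->member ... */
        }
    }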
They're now dot_sblock.c and print_sblock. The new names both better
reflect their purpose and free up "flow" for outputting the real flow
analysis graphs.
Much of the data recently added to sblock_t has been moved to flownode_t.
No graph reduction is carried out yet, but the initial (innermost level)
graph has been built.
All internal structs now have "proper" names, and fit the naming convention
(eg, obj_module; like objective-c's types, but with obj instead of objc). Some
redundant types got removed (holdovers from before proper struct tag
handling).
Also, it has proven to be unnecessary to build internal classes, so
make_class and make_class_struct are gone, too.
When encoding a type to a qfo file, the type's encoding string is written
and thus needs to be valid prior to actually doing the encoding. The
problem occurs mostly in self-referential structs (particularly, obj_class)
because the struct is being encoded prior to the pointer to the struct.
Type names are cleaned up, as is the creation. Also, the class pointer in
the type encoding now gets emitted. However, I still need to actually create
_OBJ_CLASS_Class and fix the type encoding reloc handling in the linker.
Since gnu bison and flex are required anyway, no harm in using their api
prefix options. Now, qfcc can compile both QC/Ruamoko and Pascal files
(Pascal is (currently?) NOT supported in progs.src mode), selecting the
language based on the extension: .r, .qc and .c select QC/Ruamoko, .pas and
.p select Pascal, while anything else is treated as an object file (as
before).
The output can be controlled via --block-dot (not yet documented). The
files are named <sourcefile>.<function>.<stage>.dot. Currently, stage will be
one of "initial" (after expression to statement conversion), "thread"
(after jump threading), "dead" (after dead block removal), "final" (final
state before actual code emission).
It is intended for flagging buggy conditions in the compiler, particularly
after having fixed the original bug (in case something comes back from the
dead).
Despair has things locked down such that running qfcc during a build fails
due to lack of read access to /usr/local/lib. This is actually a good
thing as accidentally hitting old includes/libs (when a file gets deleted
in the tree) hides bugs. Thus, --no-default-paths to turn off default
search paths.
Statement operands throw away the high level type information, so store
type size in the operand and use this size for allocating space for temps
rather than using the low-level type.
"vector-components" in code options controls this feature. The default is
off for advanced code and on for traditional code. Disabling
vector-components prevents the component names polluting the namespace and
reduces the number of globals needed for vector fields if the components
of that field are never used.
There is much breakage, but qfcc now produces a progs.dat from either
progs.src or object files. Better yet, the progs.src result is passed
through the linker, removing much duplicate code.
Structures etc now never encode the fields as well (I might revert that
at some time). Type parsing has been removed: it has proven unnecessary
and overly complicated, and the new qfo encoding should be more useful.
Access to struct fields in near data can be done using only one operand,
but offset relocs need to be used. However, as not all defs want offset
relocs, a flag has been added to the def struct.
The space is meaningless for op_* relocations as they are always in the
code space, but def_* relocations need to know which space holds the
location to be adjusted.
find_type now operates recursively (depth-first) so built up typechains
work as expected.
@overload is treated as a specifier (directly as a storage class, similar
to typedef).
Separate compilation will take fixing object files.
The generated code is broken due to various relocation fixups being
broken, and float immediates seem to be badly broken.
Debug information is broken too.
The first function seems to work fine, but there's a problem with the
scope of params causing them to get their knickers in a twist (tangled
linked list).
Since there is now a proper symbol table, defs are now just references to
memory locations and the symbol table takes care of duplicates.
Also, start using far data for ObjQC structures.
The qfo functions have been stubbed out until I figure out what to do with
object files in the new scheme.
That which isn't rewritten is horribly broken. However, this does include a
nice mechanism for building QC structs for emitting data.
emit.c has been gutted in the spirit of "throw one away".
There is much work to be done to get even variables emitted, let alone
code. Things should be a little more fun from here on.
Use "@reference ClassName;" or "@reference ClassName(CategoryName);" to
create a reference to the class or category, forcing the defining object file
to be linked into the program when the object file is part of a library.