* / % %% + -
As a bonus, includes partial tests for a few extra operators. Several
things are broken at this stage, but uncommitted code is already
working.
Float bit-ops as well.
Also, add q*v4 and v4*q instructions. There are currently 48 free
opcodes, and I might remove the scale instructions, but they could be
useful as expanding a single float to a vector would take 3 instructions
(copy to temp, swizzle-expand temp, multiply, vs just scale).
The swizzle instruction is very powerful in that it can do any of the
256 permutations of xyzw, optionally negate any combination of the
resulting components, and zero any combination of the result components
(even all). This means the one instruction can take care of any actual
swizzles, conjugation for complex and quaternion values, zeroing vectors
(not that it's the only way), and probably other weird things.
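As a rough illustration only (the actual Ruamoko operand layout may differ), here's a minimal C sketch assuming a hypothetical 16-bit swizzle operand: 2 bits of source selection per destination component (the 256 permutations), a 4-bit negate mask, and a 4-bit zero mask.

#include <stdint.h>

/* Hypothetical encoding, not the engine's actual one:
   bits 0-7  : source selection, 2 bits per destination component
   bits 8-11 : negate mask, one bit per destination component
   bits 12-15: zero mask, one bit per destination component */
static void
swizzle_f (const float *src, uint16_t op, float *dst)
{
	float       tmp[4];     // copy first so src and dst may alias
	for (int i = 0; i < 4; i++) {
		float       c = src[(op >> (2 * i)) & 3];
		if (op & (1 << (8 + i)))
			c = -c;
		if (op & (1 << (12 + i)))
			c = 0;
		tmp[i] = c;
	}
	for (int i = 0; i < 4; i++)
		dst[i] = tmp[i];
}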
The python file was used to generate the jump table and actual swizzle
code.
They even found a bug in the addressing mode functions :) (I'd forgotten
that I wanted signed offsets from the pointer and thus forgot to cast
st->b to short in order to get the sign extension)
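For illustration (using a stand-in statement struct, not the engine's actual dstatement_t), the shape of that fix is just a cast before the pointer arithmetic:

#include <stdint.h>

typedef struct {
	uint16_t    op;
	uint16_t    a, b, c;    // operands stored as unsigned shorts
} statement_t;

/* Without the cast, st->b is zero-extended and a "negative" offset turns
   into a huge positive one; casting to short first gives the intended
   sign extension. */
static float *
offset_ptr (float *base, const statement_t *st)
{
	return base + (short) st->b;
}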
This allows the VM to select the right execution loop. qfcc currently
still produces only the old IS (it doesn't know how to deal with the new
IS yet).
When it's finalized (most of the conversion operations will go, probably
the float bit ops, maybe (very undecided) the 3-component vector ops,
and likely the CALLN ops), this will be the actual instruction set for
Ruamoko.
Main features:
- Significant reduction in redundant instructions: no more multiple
opcodes just to move the same operand size.
- load, store, push, and pop share unified addressing mode encoding
(with the exception of mode 0 for load as that is redundant with mode
0 for store, thus load mode 0 gives quick access to entity.field).
- Full support for both 32 and 64 bit signed integer, unsigned integer,
and floating point values.
- SIMD for 1, 2, (currently) 3, and 4 components. Transfers support up
to 128-bit wide operations (need two operations to transfer a full
4-component double/long vector), but all math operations support both
128-bit (32-bit components) and 256-bit (64-bit components) vectors.
- "Interpreted" operations for the various vector sizes: complex dot
and multiplication, 3d vector dot and cross product, quaternion dot
and multiplication, along with qv and vq shortcuts.
- 4-component swizzles for both sizes (not yet implemented, but the
instructions are allocated), with the option to zero or negate (thus
conjugates for complex and quaternion values) individual components.
- "Based offsets": all relevant instructions include base register
indices for all three operands allowing for direct access to any of
four areas (eg, current entity, current stack frame, Objective-QC
self, ...) instructions to set a register and push/pop the four
registers to/from the stack.
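A minimal sketch of what the based-offset addressing buys (purely hypothetical encoding and names, not the actual Ruamoko decoder): each operand carries a 2-bit base-register index, and the effective address is the selected base plus the signed operand offset.

#include <stdint.h>

typedef struct {
	uint32_t    base[4];    // eg, entity data, stack frame, self, zero
} base_regs_t;

static uint32_t
operand_addr (const base_regs_t *regs, unsigned base_ind, int16_t offset)
{
	return regs->base[base_ind & 3] + offset;
}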
Remaining work:
- Implement swizzle operations and a few other stragglers.
- Make a decision about conversion operations (if any instructions
remain, they'll be just single-component: at 14 meaningful pairs,
that's a lot of instructions to waste on SIMD versions).
- Decide whether to keep CALL1-CALL8: probably little point in
supporting two different calling conventions, and it would free up
another eight instructions.
- Unit tests for the instructions.
- Teach qfcc to generate code for the new instruction set (hah, biggest
job, I'm sure, though hopefully not as crazy as the rewrite eleven
years ago).
I wish I'd done it this way years ago (but maybe gcc 2.95 couldn't hack
the casts; I do know there were aliasing problems in the past). Anyway,
this makes operand access much more consistent for variable sized
operands (eg float vs double vs vec4), and is a big part of the new
instruction set implementation.
There is no reasonable way (due to hardware-enforced alignment issues)
to simply convert old bytecode to new (probably best done with an
off-line tool, preferably just recompiling when I get qfcc up to the
job), so both loops will need to be present. This just moves the
original loop into its own function in order to make it easy to bring in
the new (and iron out integration issues).
The opcode table is a nightmare to maintain, but this does clean it up
and speed up opcode lookups since they can now be indexed. Of course, it
turns out I had missed adding several instructions, so had to fix that,
and qfcc needed a bit of a re-jigger to get the opcode out of the table.
The switch from using pr_functions (dfunction_t) to function_table
(bfunction_t) for keeping track of the current function (and thus
profiling data) broke PR_Profile as it never saw anything but 0.
Even NUM_FOR_BAD_EDICT will have a bad day if the edict pointer is
invalid, so make sure that the entity pointer is valid (within the edict
area AND a multiple of edict size).
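A sketch of that validity test (names are illustrative, not the engine's actual macros):

#include <stddef.h>
#include <stdbool.h>

static bool
edict_ptr_valid (const char *edict_area, size_t area_size,
                 size_t edict_size, const char *ent)
{
	ptrdiff_t   ofs = ent - edict_area;
	return ofs >= 0 && (size_t) ofs < area_size
		&& (size_t) ofs % edict_size == 0;
}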
PR_LoadDebug now does only the initial version and crc checks, and the
byte-swapping of the loaded symbols file. PR_DebugSetSym sets up all the
pointers.
For now, the functions check for a null hunk pointer and use the global
hunk (initialized via Memory_Init) if necessary. However, Hunk_Init is
available (and used by Memory_Init) to create a hunk from any arbitrary
memory block. So long as that block is 64-byte aligned, allocations
within the hunk will remain 64-byte aligned.
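The alignment guarantee works out as in this minimal sketch (not Hunk_Alloc's actual signature): as long as the base is 64-byte aligned and every allocation size gets rounded up to a multiple of 64, each returned pointer stays 64-byte aligned.

#include <stddef.h>

#define HUNK_ALIGN 64

static void *
hunk_alloc (char *base, size_t *used, size_t size)
{
	void       *mem = base + *used;
	*used += (size + HUNK_ALIGN - 1) & ~(size_t) (HUNK_ALIGN - 1);
	return mem;
}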
Fixes #12
However, this is a bit of a band-aid in that the code for global defs
seems redundant (there is very similar code a little above that is
always executed) and the code for field defs should probably be executed
unconditionally: I suspect the problem fixed by
d5454faeb7 still shows with game code
compiled with recent versions of the compiler, I just haven't tested
any.
Support for finding the first address associated with a source line was
added to the engine, returning 0 if not found.
A temporary breakpoint is set and the progs allowed to run free.
However, better handling of temporary breakpoints is needed as currently
a "permanent" breakpoint will be cleared without clearing the temporary
breakpoint if the permanent breakpoint is hit while execute-to-cursor is
running.
Legacy progs do not have the extended defs data (and usually won't have
anything more complicated than a vector), so use the basic type size for
the def size. Fixes broken edict prints.
The merge with the improvements I made while hacking on csqc (still
undecided as to whether to continue that project) resulted in the size
of the progs string area getting mangled when no heap was allocated for
the progs due to a null zone pointer being used in some pointer
arithmetic. Fixes random(!!!) invalid string error in qfprogs.
While this caused some trouble for pr_strings and configurable strftime
(evil hacks abound), it's the result of discovering an ancient (from
maybe as early as 2004, definitely before 2012) bug in qwaq's printing
that somehow got past months of trial-by-fire testing (origin understood
thanks to the warning finding it).
The server edict arrays are now stored outside of progs memory, only the
entity data itself (ie data accessible to progs via ent.fld) is stored in
progs memory. Many of the changes were due to code accessing edicts and
entity fields directly rather than through the provided macros.
I never liked that some of the macros needed the type as a parameter
(yay typeof and __auto_type), or that those returning a value hid the
return statement so they couldn't be used in assignments.
Still "some" more to go: a pile to do with transforms and temporary
entities, and a nasty one with host_cbuf. There's also all the static
block-alloc lists :/
There's still some cleanup to do, but everything seems to be working
nicely: `make -j` works, `make distcheck` passes. There is probably
plenty of bitrot in the package directories (RPM, debian), though.
The vc project files have been removed since those versions are way out
of date and quakeforge is pretty much dependent on gcc now anyway.
Most of the old Makefile.am files are now Makemodule.am. This should
allow for new Makefile.am files that allow local building (to be added
on an as-needed basis). The current remaining Makefile.am files are for
standalone sub-projects.
The installable bins are currently built in the top-level build
directory. This may change if the clutter gets to be too much.
While this does make a noticeable difference in build times, the main
reason for the switch was to take care of the growing dependency issues:
now it's possible to build tools for code generation (eg, using qfcc and
ruamoko programs for code-gen).
This allows a debugger to do any symbol lookups and other preparations
between loading progs and the first code execution. .ctors are called as
per normal if debug_handler is not set.
This is the first step in reworking PR_Sprintf to use a state machine.
The goal is to make it more robust against errors and easier to extend
(eg, * width and precision).
And rename prd_exit to prd_terminate (the idea is the host will
terminate the VM). This makes it possible for the debugger to pause the
VM before any code, even a builtin function, is executed. Breaks the
debugger source window, but only because it's not updating on file
change (I think).
I decided I want events for VM enter/exit but enter needs to somehow
pass the function which will be executed (even if a builtin). A generic
void * param seemed the best idea, which meant the error string could be
passed via the param instead of a "global" string in the progs struct.
They take a pointer to a free-list used for hashlinks so the hashlink
pools can be per-thread. However, hash tables that are not updated are
always thread-safe, so this affects only updates. progs_t has been set
up such that it is easy for multiple progs within one thread to share
hashlinks.
While there was a breakpoint hook, it was for only breakpoints and more
was needed. Now there's a generic hook that is called for tracing,
breakpoints, watch points, runtime errors and VM errors, with the
"event" type passed as the first parameter and a data pointer in the
second.
The idea is to find the def that contains the address. Had to write my
own bsearch (well... lifted from Wikipedia) because libc's is exact. The
defs are assumed to be sorted (which qfcc now ensures when it writes
progs and sym files).
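A minimal sketch of that "containing" search, assuming defs sorted by offset and non-overlapping (illustrative types, not the engine's):

#include <stddef.h>

typedef struct {
	unsigned    ofs;
	unsigned    size;
} def_range_t;

/* Unlike libc bsearch, which needs an exact match, this returns the def
   whose [ofs, ofs + size) range contains the address. */
static const def_range_t *
find_def (const def_range_t *defs, size_t num_defs, unsigned addr)
{
	size_t      lo = 0, hi = num_defs;
	while (lo < hi) {
		size_t      mid = lo + (hi - lo) / 2;
		if (addr < defs[mid].ofs)
			hi = mid;
		else if (addr >= defs[mid].ofs + defs[mid].size)
			lo = mid + 1;
		else
			return &defs[mid];
	}
	return NULL;    // address not covered by any def
}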
Type encodings are used whenever they are available. For now, if they
are not, then everything is treated as void (which prints <void>, not
very useful). Most return statements and references to .return are now
very readable (excluding structs), and only params going through "..."
are a messy union.
The memset instructions now match the move* instructions other than the
first operand (always int). Probably breaks much, but fixed in next few
commits.
If a temp string is found in the return slot, PR_FreeTempStrings won't
delete the string. However, PR_PopFrame was blindly stomping on the
possibly surviving temp string with the push strings, which would cause
a leak.
This "pushes" a temp string onto the callee's stack frame after removing
it from the caller's stack frame. This is so builtins can pass
auto-freed memory to called progs code. No checking is done, but mayhem
is likely to ensue if a string is pushed that was allocated in an
earlier frame.
PR_AllocTempBlock() works the same way as PR_SetTempString(), except
that it takes a size parameter and always allocates (never tries to
merge). This is, in a way, abusing the string system, but I needed a way
to allocate a block of progs memory that would be automatically freed
when the current frame ended. The biggest abuse is the need to cast away
the const of PR_GetString()'s return value.
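A hedged usage sketch of that abuse (header path, types, and exact signatures are from memory, not checked against current headers):

#include <string.h>
#include "QF/progs.h"

/* Allocate scratch space in progs memory that the VM frees automatically
   when the current stack frame is popped; the cast drops the const from
   PR_GetString's return so the block can be written. */
static float *
temp_float_array (progs_t *pr, int count)
{
	__auto_type block = PR_AllocTempBlock (pr, count * sizeof (float));
	float      *data = (float *) PR_GetString (pr, block);
	memset (data, 0, count * sizeof (float));
	return data;    // valid only until the current frame ends
}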
Rather than relying on progs code version, use the string to determine
whether PR_Sprintf should behave as if floats have been promoted through
"...". I imagine I'll get to the rest of the server code at some stage.
With these two changes, nq-x11 works again (teleporters were the
symptom).
With this, the VM is very close to being safe to use in a threaded
environment (so long as each VM is used by only one thread). Just the
debug file hash and source paths to sort out.
The progs execution code will call a breakpoint handler just before
executing an instruction with the flag set. This means there's no need
for the breakpoint handler to mess with execution state or even the
instruction in order to continue past the breakpoint.
The flag being set in a progs file is invalid.
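The idea, sketched (not the engine's actual loop; the flag's bit position here is hypothetical):

#include <stdint.h>

#define OP_BREAK 0x8000     // hypothetical position for the flag bit

typedef struct { uint16_t op, a, b, c; } statement_t;

/* The handler runs *before* a flagged instruction executes, so it never
   has to patch the opcode or adjust execution state to continue past the
   breakpoint. */
static void
execute (statement_t *st, void (*on_break) (statement_t *))
{
	while (1) {
		if (st->op & OP_BREAK)
			on_break (st);
		uint16_t    op = st->op & ~OP_BREAK;
		if (op == 0)            // OP_DONE in the classic encoding
			return;
		/* ... decode and execute op ... */
		st++;
	}
}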
For technical reasons (programmer laziness), qfcc does not fix up local
def type encodings when writing the debug symbols file (type encoding
location not readily accessible).
The debug subsystem now uses the resources system to ensure it cleans
up, and its data is now semi-private. Unfortunately, PR_LoadDebug had to
remain public for qfprogs because using PR_RunLoadFuncs would cause
builtin resolution to complain.
It is now set to 0 when progs are loaded and every time
PR_ExecuteProgram() returns. This takes care of the default case, but
when setting parameters, pr_argc needs to be set correctly in case a
vararg function is called.
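A hedged caller-side sketch (macro and type names are from memory and may not match current headers):

#include "QF/progs.h"

/* Builtins or host code passing parameters to a "..." progs function need
   to set pr_argc themselves, since the engine resets it to 0 every time
   PR_ExecuteProgram returns. */
static void
call_vararg (progs_t *pr, pr_func_t func, float a, float b)
{
	P_FLOAT (pr, 0) = a;
	P_FLOAT (pr, 1) = b;
	pr->pr_argc = 2;
	PR_ExecuteProgram (pr, func);   // pr_argc is 0 again after this
}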
PR_SaveParams() is required for implementing the +initialize diversion
used by Objective-QuakeC because builtins do not have local def spaces
(of course, a normal stack calling convention would help). However, it
is entirely possible for a call to +initialize to trigger another call
to +initialize, thus the need for stacking parameter stashes. As a
bonus, this implementation cleans up some fields in progs_t.
The engine now requires non-v6 progs to store the log2 alignment for the
param struct in .param_alignment.
PR_EnterFunction is clearer and possibly more efficient.
Only as scalars, I still need to think about what to do for vectors and
quaternions due to param size issues. Also, doubles are not yet
guaranteed to be correctly aligned.
I've decided that setting pr.max_edicts and pr.zone_size as part of the
local progs initialization rather than in PR_LoadProgsFile makes more
sense. For one, it is unlikely for the limits to change every time progs is
reloaded. Also, they seem to be a property of the VM rather than the progs.
However, there is nothing stopping the caller from updating max_edicts and
zone_size every call.
While scan-build wasn't what I was looking for, it has proven useful
anyway: many of the sizeof errors were just noise, but a few were actual
bugs (allocating too much or too little memory).
The offset to compensate for st++ was missing.
Obviously, the code has never been tested. Found while looking at the
jump code and thinking about using 32-bit addresses for the jump tables.
I'd forgotten that ED_ConvertToPlist mangled light into light_lev and
single component angle values into a vector. This fixes much of the
breakage in qflight (but not the light levels).
It was pointed out by Blub\w (gmqcc) that OP_MUL_FV and friends were buggy
when the operands overlapped (eg, x = x.x * x) as the result would become
'x.x*x.x x.y*x.x*x.x x.z*x.x*x.x' (note the x.x squared for y and z). On
testing, sure enough the bug was present (and is a nice demonstration that
QF's VM does NOT have strict-aliasing bugs). As a very nice benefit: the
code produced by the fixes is actually faster than the broken version :).
The ruamoko code used for testing:
void (string fmt, ...) printf = #0;

vector foo (vector x)
{
	x = x * x.x;
	return x;
}

vector bar (vector x)
{
	x = x.x * x;
	return x;
}

int main ()
{
	vector x = '2 3 4';
	vector y = foo (x);
	vector z = bar (x);
	printf ("x=%v y=%v z=%v 2*x=%v\n", x, y, z, 2*x);
	return 0;
}
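In C terms the shape of the fix is just to read the scalar into a local before any component of the result is written; here's an illustrative sketch (plain pointers stand in for the engine's operand macros):

static void
mul_fv_buggy (const float *a, const float *b, float *c)
{
	/* x = x.x * x makes a, b and c all point at x, so writing c[0] first
	   means the later reads of a[0] see x.x squared, giving the
	   'x.x*x.x  x.y*x.x*x.x  x.z*x.x*x.x' result noted above. */
	c[0] = a[0] * b[0];
	c[1] = a[0] * b[1];
	c[2] = a[0] * b[2];
}

static void
mul_fv_fixed (const float *a, const float *b, float *c)
{
	float       scale = a[0];   // read the scalar once, before any writes
	c[0] = scale * b[0];
	c[1] = scale * b[1];
	c[2] = scale * b[2];
}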
Need to up the precision by one due to the difference between g and e, but
much prettier. Might need to rename that function :P I wish I'd thought to
check if g would work, but thanks to divVerent for the suggestion.
Normally, the order doesn't matter, but when tracing code, it becomes very
difficult to tell where the trace ends and the dump begins. Printing the
message first puts the message between the trace and the dump: much easier
:)
Aliasing the jump table to an integer broke statement_get_targetlist with
the new alias def handling, and was really wrong anyway. I probably did
that due to being fed up with things and wanting to get qfcc working again
rather than spending time getting jumpb right.
Using "=" was rather confusing, so changing it to "<CONV>" seems to be a
good idea. As the string is used only for selecting opcodes at compile
time, only qfcc is affected.
Certain versions of qcc (fteqcc comes to mind :P) strip the names from
builtin functions. This breaks saved games that happen to have a builtin
function in a saved function variable. The earlier builtin name
reconstruction patch happened to fix the writing of save games for such
progs, and this one fixes the reading.
That is, the descriptors loaded from the progs file. Some compilers (eg,
fteqcc :P) strip builtin names from the progs, which makes debugging
difficult.
Rather than checking the raw edict count in the entities file against the
progs' max_edicts, check the allocated entity's number. This allows loading
of sophisticated maps (eg, digs04) that prune many of their entities.
There will normally be only one unnamed field (if any), and it's always the
null field. This will put an eventual end to the "'' is not a field"
messages.
o All instances of LIBADD/LDADD have a corresponding DEPENDENCIES
specification.
o libraries now use a lib_ldflags macro to keep things consistent
o duplication of source/lib names has been minimized (particularly in
the libraries; more work needs to be done for the executables)
o automake spec blocks have been organized (again, more work needs to be
done for the executables)
qcc always used vector stores to load values into the function parameters,
but if the location of the value is too close to the end of the global data
block (the vector spans the end of the block), it would trigger the bounds
check code. Thus, allow such instructions without a murmur, so long as it
actually is a parameter write.
In the original save game format, global vectors were saved as individual
components rather than as a single vector, using the _x/_y/_z tags on the
vector name. However, recent qfcc completely dumped vector components as
separate defs, so old save games would have trouble loading with progs
built with a recent qfcc. Thus, do the component translation if necessary.
I was wondering why that parrot was dead.
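A sketch of the component translation described above (parser structure and names are stand-ins, not the engine's actual save-game loader):

#include <string.h>

/* If a saved key looks like "foo_x"/"foo_y"/"foo_z", strip the suffix into
   base and return which component (0..2) it names; return -1 otherwise. */
static int
component_for_key (const char *key, char *base, size_t base_size)
{
	size_t      len = strlen (key);
	if (len > 2 && len - 2 < base_size && key[len - 2] == '_'
		&& key[len - 1] >= 'x' && key[len - 1] <= 'z') {
		memcpy (base, key, len - 2);
		base[len - 2] = 0;
		return key[len - 1] - 'x';
	}
	return -1;
}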
Not realizing that negke's coag3 map had too many entities really ruined
the pleasure of playing it, so it's best to treat such situations as an
error (max_edicts can be bumped up to 32000 if need be, but 2048 is plenty
for his map).
I don't know about other FTEqcc compiled progs, but quoth doesn't try to do
anything clever (all its denormals are either vars of the wrong type, or
-0.0).
Some QuakeC compilers (eg, FTE) insert a data chunk between the progs
header and the rest of the progs data. Unfortunately, FTE does not maintain
the assumed 32-bit alignment.