The method is still held by known_methods, so freeing it causes grief.
However, leaving it unfreed may cause a leak, so the free is only
commented out rather than removed. More investigation is needed. I'm
surprised the problem didn't show on linux, but cygwin-native hit it,
and valgrind on linux found the spot :)
While it does get a bit cluttered currently, being able to see the
contents of structures makes a huge difference. It also highlights that
vector immediates do not get the correct type encodings.
This fixes the internal error generated by the likes of
`(sv_gravity * '0 0 1')` where sv_gravity is a float and `'0 0 1'` is an
ivec3: the vector is promoted to vec3 first so that sv_gravity is
expanded to vec3 instead of ivec3 (which is not permitted for a float:
expansion requires the destination base type to be the same as the
source's).
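
Roughly, the ordering is (with made-up stand-in types, not the actual
qfcc expression code):

```c
typedef enum { ev_int, ev_float } etype_t;                  /* stand-in */
typedef struct expr_s { etype_t base; int width; } expr_t;  /* stand-in */

/* Promote the vector constant to the scalar's base type before expanding
   the scalar, so the expansion's source and destination base types match:
   '0 0 1' (ivec3) becomes vec3, then the float sv_gravity expands to vec3. */
static void
promote_then_expand (expr_t *scalar, expr_t *vec)
{
    if (vec->base != scalar->base)
        vec->base = scalar->base;   /* ivec3 -> vec3 */
    scalar->width = vec->width;     /* expansion now has matching bases */
}
```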
For now, anyway, as the generated code looks good. There might be
problems with actual pointer expressions, but it allows entity.field to
work as expected rather than generate an ICE.
The resultant unicode is encoded as utf-8, which does conflict with the
quake character map, but right now unicode is useful only with font
text, and fonts support only standard unicode (currently only as utf-8).
Something will need to be sorted out eventually.
Arrays are passed as a pointer to the first element, so are always valid
parameters. Fixes a bogus "formal parameter N is too large to be passed
by value" error.
While swizzle does work, it requires the source to be properly aligned
and thus is not really the best choice. The extend instruction has no
alignment requirements (at all) and thus is much better suited to
converting a scalar to a vector type.
Fixes #30
It seems clang loses track of the usage of the referenced unions by the
time the code leaves the switch. Due to the misoptimization, "random"
values would get into the vector constants. This puts the usages in the
same blocks as the unions, causing clang to "get it right" (though I
strongly suspect I was running into UB).
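
The shape of the change, very roughly (this is an illustration, not the
real qfcc code):

```c
#include <string.h>

typedef union { int ivec[4]; float fvec[4]; } immediate_t;  /* stand-in */

static void
store_constant (float out[4], const float val[4], const int ival[4],
                int from_int)
{
    immediate_t imm;
    switch (from_int) {
        case 1:
            memcpy (imm.ivec, ival, sizeof (imm.ivec));
            for (int i = 0; i < 4; i++)
                out[i] = (float) imm.ivec[i];       /* used in this block */
            break;
        default:
            memcpy (imm.fvec, val, sizeof (imm.fvec));
            memcpy (out, imm.fvec, sizeof (imm.fvec)); /* used in this block */
            break;
    }
    /* previously the union was read here, after the switch, and clang lost
       track of which member had last been written */
}
```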
While I might need to tighten up the rules later, this allows binary
operations between vector (the type) and explicitly typed vec3 constants
(and non-constants, about which I am undecided). The idea is that
explicit constants such as '1 2 3'f should be compatible with either
type.
This applies to quaternions as well.
As a class's ivars are built up by inheritance, but with only that
class's ivars in the symbol table, it is necessary to include an offset
based on the super class's ivars in order to ensure alignments are
respected. This is achieved via the new `base` parameter to
build_struct(), which is used to offset the current size while
calculating the aligned offset of the symbols. The parameter is ignored
for unions, as they always start at 0. The ivars for the current class
still have a base offset of 0 until they are actually added to the
class.
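
Roughly, the calculation the `base` parameter enables (illustrative
only, and assuming power-of-2 alignments in words):

```c
#include <stddef.h>

/* Compute the next symbol's offset within the current class, given the
   size accumulated so far and the symbol's alignment, while respecting
   the words already occupied by the superclass's ivars. Unions would
   pass 0 for base, as they always start at 0. */
static size_t
ivar_offset (size_t base, size_t current_size, size_t alignment)
{
    size_t absolute = base + current_size;
    size_t aligned  = (absolute + alignment - 1) & ~(alignment - 1);
    return aligned - base;      /* still relative to the current class */
}
```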
Fixes #29
The alignment is specified as a power of 2 (ie, actual alignment = 1 <<
alignment) allowing old object files to be compatible (as their
alignment is 0). This is necessary (in part for #30) as it turned out
that even global vectors were not aligned correctly.
Currently, only data spaces even vaguely respect alignment. This may
need to be fixed in the future.
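
That is, the decode is simply (trivial sketch):

```c
#include <stddef.h>

/* The stored alignment is a power-of-2 exponent: old object files store 0,
   which decodes to an alignment of 1 word, so they remain compatible. */
static size_t
decode_alignment (unsigned stored)
{
    return (size_t) 1 << stored;    /* 0 -> 1, 1 -> 2, 2 -> 4 (vec4), ... */
}
```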
Most were pretty easy and fairly logical, but gib's regex was a bit of a
pain until I figured out the real problem was the conditional
assignments.
However, libs/gamecode/test/test-conv4 fails when optimizing due to gcc
using vcvttps2dq (which is nice, actually) for the vector forms, but not
the scalar equivalent at other times. I haven't decided what to do with
the test (I might abandon it, as the behaviour does seem to be
undefined).
Defs and symbols benefit from swizzling as that's one instruction vs 2-3
for loading a scalar into a vector component by component. Constants are
ok because the result gets converted to a vector constant.
qfcc is putting two temps in the same location due to
defspace_alloc_aligned_loc returning the same address when there was a
hole caused by an earlier aligned alloc: specifically, a size-3 hole and
a size-2 allocation with alignment-2.
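
For illustration, the kind of fit check involved (the real defspace
free-list handling is more involved, and the names here are made up):

```c
#include <stdbool.h>
#include <stddef.h>

/* A size-3 hole starting at `start`, and a size-2 request with alignment 2:
   if the fit test looks only at the raw hole size, the aligned address can
   run past the end of the hole into space that has already been handed out,
   so the check has to use the aligned offset. */
static bool
hole_can_hold (size_t start, size_t hole_size, size_t size, size_t align)
{
    size_t aligned = (start + align - 1) & ~(align - 1);
    return aligned + size <= start + hole_size;
}
```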
The destination operand must be a full four component vector, but the
source can be smaller and small sources do not need to be aligned: the
offset of the source operand and the swizzle indices are adjusted. The
adjustments are done during final statement emission in order to avoid
confusing the data flow analyser (and that's when def offsets are known).
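
One way such an adjustment could look (made-up names; the real code
works on defs and operands during final emission):

```c
/* Re-base an unaligned, smaller-than-vec4 source onto the previous 4-word
   boundary and shift the swizzle indices by the same amount, so the swizzle
   still selects the intended components. */
static void
adjust_swizzle (int *src_offset, int swizzle[4])
{
    int shift = *src_offset & 3;    /* words past the vec4 boundary */
    *src_offset -= shift;
    for (int i = 0; i < 4; i++)
        swizzle[i] += shift;
}
```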
This reverts commit 2904c619c1.
In order to support swizzle operations, I need to be able to alias defs
to larger types (eg, float to vec4), but alias_def rightly won't allow
this. However, as the plan is to do this in the final steps before
emitting the instruction, I plan on creating an alias to a float then
adjusting the type in the alias, but to do so without extra shenanigans,
I need alias_def to allow aliases to the same type. As a fringe benefit,
it makes the code agree with the comment in def.h :P
This came up when investigating an internal error from the line above.
It turned out the error was correct (problem with converting scalars to
vectors), but the break was not.
Currently, only vector/vec3 and quaternion/vec4 can be printed anyway,
but I plan on making explicit format strings for the types, so there
should be no need to promote any vector types (and really, any hidden
promotion is a bit of a pain, but standards...).
While the code would handle int vector types, there aren't any such
instructions, and the expression code shouldn't generate them, but all
float (32 and 64 bit) vector types do have a dot product instruction, so
check width rather than just vector/quaternion.
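
So the check becomes roughly (stand-in type info, not qfcc's actual
representation):

```c
#include <stdbool.h>

typedef struct { bool is_float; int width; } type_info_t;   /* stand-in */

/* Any float or double operand wider than one component has a dot-product
   instruction, so key on width rather than on the vector/quaternion types. */
static bool
has_dot_instruction (const type_info_t *type)
{
    return type->is_float && type->width > 1;
}
```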
This fixes an error that's been lurking for over two years (since I made
parameters unlimited internally). The problem was the array was being
allocated on the stack and a simple struct copy was used to store the
type, resulting in a dangling pointer onto the stack. I'm surprised it
didn't cause more problems.
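
The shape of the bug, reconstructed for illustration (not the actual
qfcc structures):

```c
typedef struct {
    int         num_params;
    const int  *param_types;    /* stand-in for the real parameter types */
} func_info_t;

static func_info_t
build_function_type (int num_params)
{
    int params[32];             /* the array lived on the stack */
    for (int i = 0; i < num_params; i++)
        params[i] = 0;
    func_info_t info = { num_params, params };
    return info;                /* info.param_types now dangles */
}
```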
This allows all the tests to build and pass. I'll need to add tests to
ensure warnings happen when they should and that all vec operations are
correct (ouch, that'll be a lot of work), but vectors and quaternions
are working again.
Vector expressions no longer auto-widen due to the new vector types (I
might add such later, but for now this lets the tests try to build
(minus actual fixes in qfcc)).
With this, all vector widths and types are supported: 2, 3, 4 and int,
uint, long, ulong, float and double, along with support for suffixes to
make the type explicit: '1 2'd specifies a dvec2 constant, while '1 2 3'u
is a uivec3 constant. Default types are double (dvec2, dvec3, dvec4) for
literals with float-type components, and int (ivec2...) for those with
integer-type components.
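
For example, mapping the suffixes mentioned above to component types
(illustrative, not the scanner code; other suffixes are omitted here):

```c
static const char *
component_type_for_suffix (char suffix, int has_float_component)
{
    switch (suffix) {
        case 'd': return "double";  /* '1 2'd   -> dvec2 */
        case 'f': return "float";   /* '1 2 3'f -> vec3 */
        case 'u': return "uint";    /* '1 2 3'u -> uivec3 */
        default:                    /* no suffix: defaults described above */
            return has_float_component ? "double" : "int";
    }
}
```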
Having three very similar sets of code for outputting values (just for
debug purposes even) got to be a tad annoying. Now there's only one, and
in the right place, too (with the other value code).
I'd created new_value_expr some time ago, but never used it...
Also, replace convert_* with cast_expr to the appropriate type (removes
a pile of value check and create code).
Use with quaternions and vectors is a little broken in that
vec4/quaternion and vec3/vector are not the same types (by design) and
thus a cast is needed (not what I want, though). However, creating
vectors (that happen to be int due to int constants) does seem to be
working nicely otherwise.
Nicely, I was able to reuse the generated conversion code used by the
progs engine to do the work in qfcc, just needed appropriate definitions
for the operand macros, and to set up the conversion code. Helped
greatly by the new value load/store functions.
pr_type_t now contains only the one "value" field, and all the access
macros now use their PACKED variant for base access, making access to
larger types more consistent with the smaller types.
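
Schematically (a stand-in, not the actual header; the macro name is made
up):

```c
typedef struct pr_type_s {
    unsigned    value;          /* one VM word */
} pr_type_t;

/* A PACKED-style accessor reinterprets a run of words starting at an
   arbitrary word offset, so 2-, 3- and 4-word types are read the same
   way 1-word types are. */
#define PACKED_ACCESS(base, type, offset) (*(type *) &(base)[offset].value)
```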
This is an extremely extensive patch as it hits every cvar, and every
usage of the cvars. Cvars no longer store the value they control;
instead, they use a cexpr value object to reference the value and
specify the value's type (currently, a null type is used for strings).
Non-string cvars are passed through cexpr, allowing expressions in the
cvars' settings. Also, cvars have returned to an enhanced version of the
original (id quake) registration scheme.
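
A sketch of the resulting shape (all names here are assumptions for
illustration, not the engine's actual declarations):

```c
typedef struct exprtype_s exprtype_t;   /* stand-in for cexpr's type objects */

typedef struct cvar_s {
    const char *name;
    const char *description;
    const char *default_value;
    const exprtype_t *type;     /* null type means a plain string cvar */
    void       *value;          /* points at the variable the cvar controls */
} cvar_t;

/* The controlling code owns the variable directly... */
static float cl_volume;

/* ...and the descriptor just references it. */
static cvar_t cl_volume_cvar = {
    .name = "volume",
    .description = "sound volume",
    .default_value = "0.7",
    .type = 0,                  /* would reference cexpr's float type */
    .value = &cl_volume,
};
```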
As a minor benefit, relevant code having direct access to the
cvar-controlled variables is probably a slight optimization as it
removes a pointer dereference, and the variables can be located for data
locality.
The static cvar descriptors are made private as an additional safety
layer, though there's nothing stopping external modification via
Cvar_FindVar (which is needed for adding listeners).
While not used yet (partly due to working out the design), cvars can
have a validation function.
Registering a cvar allows a primary listener (and its data) to be
specified: it will always be called first when the cvar is modified. The
combination of proper listeners and direct access to the controlled
variable greatly simplifies the more complex cvar interactions as much
less null checking is required, and there's no need for one cvar's
callback to call another's.
nq-x11 is known to work at least well enough for the demos. More testing
will come.
The "not" is because I'm pretty sure they're false positives due to when
the function is called, but clang doesn't know that (I wonder why gcc
was ok with it).
It seems clang defaults to unsigned for enums. Interestingly, gcc was ok
with the checks being either way. I guess gcc treats enums that *can* be
unsigned as DWIM.
In working with vectors and matrices while testing the scene wrappers, I
found that there was a fair bit of confusion about how large something
could be. Return values can be up to 32 words (but qfcc wasn't aware of
that), parameters were limited to 4 words still (and possibly should be
for varargs), and temp defs were limited to 8 words (1 lvec4). Temps are
used for handling return values (at least when not optimizing) and thus
must be capable of holding a return value, and passing large arguments
through *formal* parameters should be allowed. It seems reasonable to
limit parameter sizes to return value sizes.
A temp and a move are still used for large return values (4x4 matrix),
but that's an optimization issue: the code itself is at least correct.
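
Putting numbers on it (taken from the description above; the names are
made up):

```c
enum {
    MAX_RETURN_WORDS = 32,                  /* return values: up to 32 words */
    MAX_PARAM_WORDS  = MAX_RETURN_WORDS,    /* formal params get the same limit */
    MAX_TEMP_WORDS   = MAX_RETURN_WORDS,    /* temps must hold a return value */
    MAT4_WORDS       = 4 * 4,               /* a 4x4 float matrix: 16 words */
};
```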