While the main bulk of the improvement (36s down from 42s for
gmsp3v2.bsp on my i7-6850K) comes from using a high-tide allocator for
the windings (which necessitated using a fixed size), it is ever so
slightly faster than using malloc as the back-end.
This is for the conversion /to/ paletted textures, which is necessary for
csqc support. In the process, the conversion has been sped up by adding a
color cache. I haven't measured the difference yet, but Mr Fixit does seem
to load much faster in the sw renderer than it did before the change (going
by many-months-old memory).
This separates the FOV calculations from the other refdef calcs, cleaning up the
renderer proper and making it easier for other parts of the engine (eg,
csqc) to update the fov.
The server edict arrays are now stored outside of progs memory, only the
entity data itself (ie data accessible to progs via ent.fld) is stored in
progs memory. Many of the changes were due to code accessing edicts and
entity fields directly rather than through the provided macros.
Loading is broken for multi-file image sets due to the way images are
loaded (this needs some thought to make it efficient), but the
Blender environment map loading works.
They're unlit (fullbright, but that's nothing new for quake), but
working nicely. As a bonus, this sorts out the sky pass (forced on me by
the way command buffers are used).
There were actually several problems: translucency wasn't using or
depending on the depth buffer, and the depth buffer wasn't marked as
read-only in the g-buffer pass. Getting that correct seems to have given
bigass1 a 0.5% boost (hard to say, could be the usual noise).
That was... easier than expected. A little more tedious than I would
have liked, but my scripting system isn't perfect (I suspect it's best
suited as the output of a code generator), and the C side could do with
a little more automation.
Other than dealing with shader data alignment issues, that went well :).
Nicely, the implementation gets the explicit scaling out of the shader,
and allows for a directional flag.
I never liked that some of the macros needed the type as a parameter
(yay typeof and __auto_type), or that those returning a value hid the
return statement so they couldn't be used in assignments.
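For example (a generic sketch, not one of the actual macros), typeof and
__auto_type with a gcc statement expression give a max that needs neither
a type parameter nor a hidden return:
#define GENERIC_MAX(a, b)   /* illustrative, not an actual QF macro */ \
    ({  __auto_type max_a = (a);                                       \
        __auto_type max_b = (b);                                       \
        max_a > max_b ? max_a : max_b; })
int biggest = GENERIC_MAX (width, height);   // usable in an assignment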
Still "some" more to go: a pile to do with transforms and temporary
entities, and a nasty one with host_cbuf. There's also all the static
block-alloc lists :/
Light styles and shadows aren't implemented yet.
The map's entities are used to create the lights, and the PVS used to
determine which lights might be visible (ie, the surfaces they light).
That could do with some more improvements (eg, checking if a leaf is
outside a spotlight's cone), but the concept seems to work.
Double benefit, actually: faster when building a fat PVS (don't need to
copy as much) and can be used in multiple threads. Also, default visibility
can be set, and the buffer size has its own macro.
Useful for avoiding a pile of wrapper functions that merely pass on
command-specific data to the actual implementation. Used to clean up the
wrappers in nq and qw cl_input.c.
This is the first step towards component-based entities.
There's still some transform-related stuff in the struct that needs to
be moved, but it's all entirely client related (rather than renderer)
and will probably go into a "client" component. Also, the current
components are directly included structs rather than references as I
didn't want to deal with the object management at this stage.
As part of the process (because transforms use simd), this also starts
moving QF to simd for vectors and matrices. There's now a mess of simd
and sisd code mixed together, but it works surprisingly well.
The plan is to have a fully component based entity system. This adds
hierarchical transforms. Not particularly useful for quake itself at
this stage, but it will allow for much more flexibility later on,
especially when QuakeForge becomes more general-purpose.
This seems to be pretty close to as fast as it gets (might be able to do
better with some shuffles of the negation constants instead of loading
separate constants).
It's not used yet as work needs to be done to better support generic
entities, but this is the next step to real-time lighting (though, to be
honest, I expect it will be too slow to be usable).
The main purpose is to allow fluent-style:
const char *targetname = PL_String (PL_ObjectForKey (entity, "targetname"));
if (targetname && !PL_ObjectForKey (targets, targetname)) {
    PL_D_AddObject (targets, targetname, entity);
}
[note: the above is iffy due to ownership of entity, but the code from
which the above comes works around the issue]
Static lights are yet to come (so the screen is black most of the time),
but dynamic lights work very nicely (and look very good) despite the
falloff being incorrect.
While I could reconstruct the position from the screen coords and depth,
this is easier and good enough for now. Reconstruction is an
optimization thing.
Lighting doesn't actually do lights yet, but it's producing pixels.
The translucent pass seems to be working (2d draw uses it), and compose
seems to be working.
After getting lights even vaguely working for alias models, I realized
that it just wasn't going to be feasible to do nice lighting with
forward rendering. This gets the bulk of the work done for deferred
rendering, but the shaders still need to be sorted out before any real
testing can be done.
The order in which keys are added to the dictionary object is
maintained. Adding a key after removing an old key adds the new key to
the end of the list rather than reusing the old key's spot.
PL_ParseLabeledArray works the same way as PL_ParseArray, but instead
takes a dictionary object. The keys of the items are ignored, and the
order is not preserved (at this stage), but this is a cleaner solution
to getting an array of objects when the definitions of those objects
need to be accessible by name as well.
It turns out I had conflated frame buffers with frames and wound up
making a minor mess when separating the number of frames the renderer
could have in flight from the number of swap-chain images. This is the
first step towards correcting that mistake.
It's not entirely there yet, but the basics are working. Work is still
needed to avoid duplication of objects (different threads will have
different contexts and thus different tables, so the necessary per-thread
duplication should not be a problem) and for general access to arbitrary
fields (mostly just parsing the strings).
This allows plist objects to be accessed directly from cexpr expressions
using struct.field syntax for dictionary objects and array[index] syntax
for array objects.
The node struct was 72 bytes, thus two cache lines. Moving the pointer
into the brush model data block allows nodes to fit in a single cache
line (not that they're aligned yet, but that's next). It doesn't seem to
have made any difference to performance (at least in the vulkan
renderer), but it hasn't hurt, either, as the only place that needed the
parent pointer was R_MarkLeaves.
It's not quite as expected, but that may be due to one of msaa, the 0-15
range in the palette not going all the way to white, the color gradients
not being quite linear (haven't checked yet), or some combination of the
above. The main oddity is that what should be yellow comes out more
green. At least the zombies are no longer white and the ogres don't look
like they're wearing skeleton suits.
Doesn't seem to make much difference performance-wise, but speed does
seem to be fill-rate limited due to the 8x msaa. Still, it does mean
fewer bindings to worry about.
This is a big step towards a cleaner api. The struct reference in
model_t really should be a pointer, but bsp submodel(?) loading messed
that up, though that's just a matter of taking more care in the loading
code. It seems sensible to make that a separate step.
I've decided that alias model skins should be in a single four-level
array texture rather than spread over four textures, but there's no way
I want to write that code again: getting it right was hard enough the
first time :P
It now takes a context pointer (opaque data) that holds the buffers it
uses for the temporary strings. If the context pointer is null, a static
context is used (making those uses of va NOT thread-safe). Most calls to
va use the static context, but all such calls have been formatted
consistently so they are easy to find when it comes time to do a full
audit.
It's a tad bogus as it's the lights close to the camera, but it should
at least be a good start once things are working. There's currently
something very wrong with the state of things.
The dynamic array macros made this much easier than last time I looked
at it, especially when it came to figuring out the bad memory accesses
that I seem to remember from my last attempt 9 years ago.
This makes tex_t more generally usable and probably more portable. The
goal was to be able to use tex_t with data that is in a separate chunk
of memory.
The sky texture is loaded with black's alpha set to 0. While this does
hit both layers, the screen is cleared to black so it shouldn't be a
problem (and will allow having a skybox behind the sheets).
Glow map and sky sheet and cube need to wait until I can get some
default textures going, but the world is rendering correctly otherwise
(though a tad dark: need to do a gamma setting).
It now uses the ring buffer code I wrote for qwaq (and forgot about,
oops) to handle the packets themselves, and the logic for allocating and
freeing space from the buffer is a bit simpler and seems to be more
reliable. The automated test is a bit of a joke now, though coming up
with good tests for it is another matter. However, nq now cycles through
the demos
without obvious issue under the same conditions that caused the light
map update code to segfault.
Vulkan validation (quite rightly) doesn't like it when the flush range
goes past the end of the buffer, but also doesn't like it when the flush
range isn't cache-line aligned, so align the size of the buffer, too.
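The rounding itself is the usual power-of-two trick (a sketch; the helper
name is made up, and the alignment is assumed to be a power of two):
#include <stddef.h>
static size_t
align_size (size_t size, size_t align)
{
    // round size up to the next multiple of align (align is a power of two)
    return (size + align - 1) & ~(align - 1);
}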
I had originally planned on mixing the stage management with general
texture support code like I did in glsl, but I think that was a mistake
and I did keep looking for scrap.[ch] when I wanted to edit something to
do with the scrap...
This cleans up texture_t and possibly even improves locality of
reference when running through texture chains (not profiled, and not
actually the goal).
It optionally generates mipmaps, and supports the main texture types
(especially for texture packs), including palettes, but is otherwise
rather unsophisticated code. Needs a lot of work, but testing first.
Cleans up global space and makes it usable in multiple contexts. Also,
max quads dropped to 32k as each frame now has its own vertex buffer to
avoid issues with vertex overwrites (which I have seen). However, all
vertex buffers are in the one memory/buffer object (using offsets) and
the index buffer has been moved into a device-local memory object.
I think I did two as a bit of a ring buffer, but the new ring buffer
system used inside a staging buffer makes it less necessary. Also, the
staging buffer is now a fair bit bigger (4M is probably not really
enough)
This allows the array in which the command buffers are allocated to be
allocated on the stack using alloca, and thus removes the need to
malloc/free relatively small chunks.
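Something along these lines (an illustrative fragment; the function and
variable names here are made up, only the alloca idea is the point):
#include <alloca.h>
#include <vulkan/vulkan.h>

static void
alloc_cmd_buffers (VkDevice device, VkCommandPool pool, uint32_t count)
{
    // the handle array lives on the stack: no malloc/free pair needed
    VkCommandBuffer *cmdbufs = alloca (count * sizeof (VkCommandBuffer));
    VkCommandBufferAllocateInfo info = {
        .sType = VK_STRUCTURE_TYPE_COMMAND_BUFFER_ALLOCATE_INFO,
        .commandPool = pool,
        .level = VK_COMMAND_BUFFER_LEVEL_PRIMARY,
        .commandBufferCount = count,
    };
    vkAllocateCommandBuffers (device, &info, cmdbufs);
    // hand the buffers off to whatever owns them before returning
}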
The scrap texture did very good things for the glsl renderer and the
better control over data copying might help it do even better things for
vulkan, especially with lots of little icons.
It's never actually used (the texture can be fetched using
GLSL_ScrapTexture) and gets in the way of sharing the scrap system with
the vulkan renderer.
r_screen because of SCR_UpdateScreen, and r_cvar because the cvars
really should never have been in a plugin in the first place (and
r_screen needed access).
First pixels! This was a nightmare of little issues that the validation
layers couldn't help with: incorrect input assembly, incorrect vertex
attribute specs. Though the layers did help with getting the queues
working. Still, lots of work to go but this is a major breakthrough as
I now have access to visual debugging for textures and the like.
Short wrappers for the Draw functions are in vid_render_vulkan.c so the
vulkan context can be passed on to the actual functions. The 2D shaders
are set up similarly to those in glsl, but with full 32-bit color (rgba)
support instead of paletted. However, the textures are not loaded yet,
nor is anything bound.
The prototypes for handle parsers needed to be changed because it turned
out "single" was inappropriate for handles: "single" allocates memory
for the parsed object, but handles must be written directly.
The way I wound up using the field meant that exprctx should not "own"
the hashlinks chain, but rather just point to it. This fixes the nasty
access errors I had.
Dependencies on vkparse.hinc were spreading through the code which I
didn't want as that removes a lot of the automation from the automake
files. This keeps all parser code internal to vkparse.c's scope, and any
accesses required for enum and struct (not yet) definitions can be
fetched by name.
I want to be able to use name references, but that requires string
items, so anything that would normally be dictionary or array (or
binary, even) would also need to accept string. This seemed to be the
cleanest solution. Any custom parser would then need to check the type
and act appropriately, but any inappropriate types have already been
pre-filtered by the standard parsers.
Care needs to be taken to ensure the right function is used with the
right arguments, but with these, the need to use qconj(d|f) for a
one-off inverse rotation is removed.
They're binned by powers of two, with in-between sizes going to the
smaller bin should I make cache-line allocations NPOT (which I think
might be worthwhile). However, there still seems to be a bug somewhere
causing a nasty leak as now my hacked qfvis consumes 40G in less than a
minute.
The idea is to not search through blocks for an available allocation.
While the goal was to speed up allocation of cache lines of varying
cluster sizes, it's not enough due to fragmentation.
They take advantage of gcc's vector_size attribute and so only cross,
dot, qmul, qvmul and qrot (create rotation quaternion from two vectors)
are needed at this stage as basic (per-component) math is supported
natively by gcc.
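The core of the idea looks roughly like this (a trimmed sketch; the real
headers differ in detail):
typedef float vec4f_t __attribute__ ((vector_size (16)));   // 4 floats

vec4f_t
crossf (vec4f_t a, vec4f_t b)
{
    // element access and per-component arithmetic come straight from gcc
    return (vec4f_t) { a[1]*b[2] - a[2]*b[1],
                       a[2]*b[0] - a[0]*b[2],
                       a[0]*b[1] - a[1]*b[0], 0 };
}
// a + b, a * b, a * 2 etc all work per-component with no helper needed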
The provided functions work on horizontal (array-of-structs) data, ie a
vec4d_t or vec4f_t represents a single vector, or traditional vector
layout. Vertical layout (struct-of-arrays) does not need any special
functions as the regular math can be used to operate on four vectors at
a time.
Functions are provided for loading a vec4 from a vec3 (4th element set
to 0) and storing a vec4 into a vec3 (discarding the 4th element).
With this, QF will require AVX2 support (needed for vec4d_t). Without
support for doubles, SSE is possible, but may not be worthwhile for
horizontal data.
Fused-multiply-add is NOT used because it alters the results between
unoptimized and optimized code, resulting in -mfma really meaning
-mfast-math-anyway. I really do not want to have to debug issues that
occur only in optimized code.
The static variable meant that Fog_GetColor was not thread-safe (though
multiple calls in the one thread look to be ok for now). However, this
change takes it one step closer to being more generally usable.
Patch found in an old stash.
Shaders can be built as spv files and installed into
$libdir/quakeforge/shaders or as spvc files and compiled into the
engine. Loading supports $builtin/name to access builtin shaders,
$shader/path to access external standard shaders and quake filesystem
access for all other paths.
I had forgotten that the number of msaa samples is governed by the driver
(as a max) and that the renderpass setup code simply took the max. That's
why 1 vs 8 caused the display to render incorrectly.
It is capable of parsing single expressions with fairly simple
operations. It currently supports ints, enums, cvars and (external) data
structs. It is also thread-safe (in theory, needs proper testing) and
the memory it uses can be mass-freed.
This was inspired by "Hoard: A Scalable Memory Allocator for
Multithreaded Applications" by Emery D. Berger, Kathryn S. McKinley,
Robert D. Blumofe, and Paul R. Wilson.
It's not anywhere near the same implementation, but it did take a few
basic concepts. The idea is twofold:
1) A pool of memory from which blocks can be allocated and then freed
en masse, and which is fairly efficient for small (4-16 byte) blocks
2) Thread safety for use with the Vulkan renderer (and any other
multi-threaded tasks).
However, based on the Hoard paper, small allocations are cache-line
aligned. On top of that, larger allocations are page aligned.
I suspect it would help qfvis somewhat if I ever get around to tweaking
qfvis to use cmem.
The calculation fails (produces NaN) if the vectors are anti-parallel,
but works for all other combinations. I came up with this implementation
when I discovered that Unity's Quaternion.FromToRotation did not work
with very small angles. This implementation will produce a usable
quaternion below 0.00255 degrees (though it will be slightly larger than
unit). Unity's failed such that I could see KSP's skybox snap while it
rotated around my test vessel.
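The underlying idea (a sketch, not the committed code): for unit vectors
a and b, the un-normalized quaternion (cross(a,b), 1 + dot(a,b)) rotates
a onto b, and only degenerates (goes to zero) when a == -b:
typedef struct { double x, y, z, w; } quatd_t;   // illustrative type

static quatd_t
rotation_between (const double a[3], const double b[3])
{
    // a and b are assumed to be unit vectors
    return (quatd_t) {
        .x = a[1] * b[2] - a[2] * b[1],
        .y = a[2] * b[0] - a[0] * b[2],
        .z = a[0] * b[1] - a[1] * b[0],
        .w = 1 + a[0] * b[0] + a[1] * b[1] + a[2] * b[2],
    };   // slightly non-unit for tiny angles; normalize if that matters
}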
PL_ParseDictionary itself does only one level, but it takes care of the
key-field mappings and property list item type checking leaving the
actual parsing to a helper specified by the field. That helper is free
to call PL_ParseDictionary recursively.
The first line of the parsed item is stored and can be retrieved using
PL_Line. Line numbers are not stored for dictionary keys yet. The line
number will be 0 for any items generated by code rather than parsed from
a file or string.
There's still some cleanup to do, but everything seems to be working
nicely: `make -j` works, `make distcheck` passes. There is probably
plenty of bitrot in the package directories (RPM, debian), though.
The vc project files have been removed since those versions are way out
of date and quakeforge is pretty much dependent on gcc now anyway.
Most of the old Makefile.am files are now Makemodule.am. This should
allow for new Makefile.am files that allow local building (to be added
on an as-needed basis). The remaining Makefile.am files are for
standalone sub-projects.
The installable bins are currently built in the top-level build
directory. This may change if the clutter gets to be too much.
While this does make a noticeable difference in build times, the main
reason for the switch was to take care of the growing dependency issues:
now it's possible to build tools for code generation (eg, using qfcc and
ruamoko programs for code-gen).
This should keep things nicely extensible, since additional data can be
stored in the data space and found using defs. This gets the compilation
units into the sym file.
The compilation unit stores the directory from which qfcc was run and
any source files mentioned. This is similar to dwarf's compilation unit.
Right now, this is the only data in the new debug space, but more might
come in the future so it seems best to treat the debug space separately
in the object files.
I decided that stopping in between function calls that are on the same
line is a good thing as it gives a chance to skip over the first but
step into the second.
This allows a debugger to do any symbol lookups and other preparations
between loading progs and the first code execution. .ctors are called as
per normal if debug_handler is not set.
In testing variable fw/precision in PR_Sprintf, I got a nasty reminder
of the limitations of the current progs ABI: passing @args to another QC
function does not work because the args list gets trampled by the
called function's locals. Thus, the need for a va_copy. It's not quite
the same as C's as it returns the destination args instead of copying
like memcpy, but it does copy the list from the source args to a
temporary buffer that is freed when the calling function returns.
When a type is aliased, the alias has two type chains: the simple type
chain with all other aliases stripped, and the full type chain. There
are still plenty of bugs in it, but having the clean type chain takes
care of the major issue that was in the previous attempt as only the
head of the type-chain needs to be skipped for type comparison.
Most of the bugs are in finding the locations where the head needs to be
skipped.
And rename prd_exit to prd_terminate (the idea is the host will
terminate the VM). This makes it possible for the debugger to pause the
VM before any code, even a builtin function, is executed. Breaks the
debugger source window, but only because it's not updating on file
change (I think).
I decided I want events for VM enter/exit but enter needs to somehow
pass the function which will be executed (even if a builtin). A generic
void * param seemed the best idea, which meant the error string could be
passed via the param instead of a "global" string in the progs struct.
Not sure when I'll use it, but as it avoids the null pointer check, it
should make for faster access to structs when reading or writing
multiple fields.
They take a pointer to a free-list used for hashlinks so the hashlink
pools can be per-thread. However, hash tables that are not updated are
always thread-safe, so this affects only updates. progs_t has been set
up such that it is easy for multiple progs within one thread to share
hashlinks.
While there was a breakpoint hook, it was only for breakpoints and more
was needed. Now there's a generic hook that is called for tracing,
breakpoints, watch points, runtime errors and VM errors, with the
"event" type passed as the first parameter and a data pointer in the
second.
This allows for the four combinations of shift and control. Not
bothering with alt because alt-f4 closes my xterm (bbkeys from the looks
of it: it grabs a bunch of Mod1-* keys).
I left the ones below 256 alone (some were necessary anyway), but above
256 they were almost entirely sequential, which rather defeats the
purpose of using an enum, especially since it makes it difficult to
expand sections.
The idea is to find the def that contains the address. Had to write my
own bsearch (well... lifted from wikipedia) because libc's is exact. The
defs are assumed to be sorted (which qfcc now ensures when it writes
progs and sym files).
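The variant needed is essentially a lower-bound search (a sketch with
illustrative types; the real code works on the progs def arrays):
typedef struct { unsigned ofs; } defrec_t;   // stand-in for the def struct

// find the last def whose ofs is <= addr; defs must be sorted by ofs
static defrec_t *
def_containing (defrec_t *defs, unsigned num_defs, unsigned addr)
{
    unsigned    lo = 0, hi = num_defs;

    while (lo < hi) {
        unsigned    mid = lo + (hi - lo) / 2;
        if (defs[mid].ofs <= addr)
            lo = mid + 1;
        else
            hi = mid;
    }
    return lo ? &defs[lo - 1] : 0;   // 0 if addr precedes every def
}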
Type encodings are used whenever they are available. For now, if they
are not, then everything is treated as void (which prints <void>, not
very useful). Most return statements and references to .return are now
very readable (excluding structs), and only params going through "..."
are a messy union.
The memset instructions now match the move* instructions other than the
first operand (always int). Probably breaks much, but fixed in next few
commits.
This "pushes" a temp string onto the callee's stack frame after removing
it from the caller's stack frame. This is so builtins can pass
auto-freed memory to called progs code. No checking is done, but mayhem
is likely to ensue if a string is pushed that was allocated in an
earlier frame.
PR_AllocTempBlock() works the same way as PR_SetTempString(), except
that it takes a size parameter and always allocates (never tries to
merge). This is, in a way, abusing the string system, but I needed a way
to allocate a block of progs memory that would be automatically freed
when the current frame ended. The biggest abuse is the need to cast away
the const of PR_GetString()'s return value.
This should speed up ruamoko code somewhat as hash table lookups have
been replaced with direct array indexing. As a bonus, support for
message forwarding has been added (though not tested).
As well as $prefix/include, of course. This fixes the problem with
external ruamoko builds failing due to keys.h and qfcc's "lockdown" on
system headers.
It proved to be too fragile in its current implementation. It broke
pointers to incomplete structs and switch enum checking, and getting it
to work for other things was overly invasive. I still want the encoding,
but need to come up with something more robust.
Other than consistency with printf(), I'm not sure why we went with the
printed size as the return value; returning the resultant string makes
much more sense as dsprintf() (etc) can then be used as a safe va().
Other than its blocking of access to certain files, it really wasn't
that useful compared to the functions in qfs, and pointless with access
to qfs anyway.
The progs execution code will call a breakpoint handler just before
executing an instruction with the flag set. This means there's no need
for the breakpoint handler to mess with execution state or even the
instruction in order to continue past the breakpoint.
The flag being set in a progs file is invalid.
The debug subsystem now uses the resources system to ensure it cleans
up, and its data is now semi-private. Unfortunately, PR_LoadDebug had to
remain public for qfprogs because using PR_RunLoadFuncs would cause
builtin resolution to complain.
It is now set to 0 when progs are loaded and every time
PR_ExecuteProgram() returns. This takes care of the default case, but
when setting parameters, pr_argc needs to be set correctly in case a
vararg function is called.
PR_SaveParams() is required for implementing the +initialize diversion
used by Objective-QuakeC because builtins do not have local def spaces
(of course, a normal stack calling convention would help). However, it
is entirely possible for a call to +initialize to trigger another call
to +initialize, thus the need for stacking parameter stashes. As a
bonus, this implementation cleans up some fields in progs_t.
I was originally going to put it in the debug syms file, but I realized
that the data persistence code would need access to both def type and
certainly correct def offsets for defs in far data.
This far better reflects the actual meaning. It is very likely that
ty_none is a holdover from long before there was full type encoding and
it meant that the union in qfcc's type_t had no data. This is still
true for basic types, but only if not a function, field or pointer type.
If the type was function, field or pointer, it was not true, so it was
misnamed pretty much from the start.
The engine now requires non-v6 progs to store the log2 alignment for the
param struct in .param_alignment.
PR_EnterFunction is clearer and possibly more efficient.
The encoding is 3:5 giving 3 bits for alignment (log2) and 5 bits for
size, with alignment in the 3 most significant bits. This keeps the
format backwards compatible as until doubles were added, all types were
aligned to 1 word which gets encoded as 0, and the size is unaffected.
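In other words (illustrative macros, not the engine's actual names):
// pack a log2 alignment (0-7) and a size in words (0-31) into one value
#define TYPESIZE(log2align, size)   (((log2align) << 5) | (size))
#define TYPESIZE_ALIGN(ts)          ((ts) >> 5)     // log2 of the alignment
#define TYPESIZE_SIZE(ts)           ((ts) & 0x1f)   // size in words
// 1-word alignment is log2 0, so pre-double encodings come out unchanged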
Only as scalars; I still need to think about what to do for vectors and
quaternions due to param size issues. Also, doubles are not yet
guaranteed to be correctly aligned.
It turned out I needed access to the physical device from a buffer
object, so rather than storing the vulkan logical device directly in
buffer (and other) objects, store the qfv logical device.
I added Sys_RegisterShutdown years ago and never really did anything
with it: now any system that needs to be shutdown can ensure it gets
shutdown on program exit, and in the correct order (ie, reverse to init
order).
This paves the way for clean initialization of the Vulkan renderer, and
very much cleans up the older renderer initialization code as gl and sw
are no longer intertwined.
This fixes the segfault and pushes things very much in the desired
direction of proper system independence for rendering and presentation
separation (though things were headed in the right direction before).
Things are still a mess, but a proper cleanup will be a lot of work and
will, really, involve properly splitting quake-specific code* out from
the rest of the renderer.
* data loading and format specific stuff
A single graphics-capable queue should be enough for now. However, I'm
not sure I'm happy with a lot of the code: it's a bit difficult to write
flexibly configured code for Vulkan (or seems to be at this stage),
especially in C.
After messing with SIMD stuff for a little, I think I now understand why
the industry went with xyzw instead of the mathematical wxyz. Anyway, this
will make for less pain in the future (assuming I got everything).
I've decided that setting pr.max_edicts and pr.zone_size as part of the
local progs initialization rather than in PR_LoadProgsFile makes more
sense. For one, it is unlikely for the limits to change every time progs is
reloaded. Also, they seem to be a property of the VM rather than the progs.
However, there is nothing stopping the caller from updating max_edicts and
zone_size every call.
Also fix a bug where despite supporting 32 buttons, only 18 were actually
supported, and a similar issue for the number of axes.
My saitek x52 has 34 buttons and 10 axes. Whee.
_QFS_VOpenFile is actually _QFS_FOpenFile reimplemented to take vpath start
and end parameters so the search can be limited. QFS_VOpenFile,
_QFS_FOpenFile, and QFS_FOpenFile are all wrappers for _QFS_VOpenFile.
Again, based on The OpenGL Shader Wrangler. The wrangling part is not used
yet, but the shader compiler has been modified to take the built up shader.
Just to keep things compiling, a wrapper has been temporarily created.
The idea comes from The OpenGL Shader Wrangler
(http://prideout.net/blog/?p=11). Text files are broken up into chunks via
lines beginning with -- (^-- in regex). The chunks are optionally named
with tags of the form: [0-9A-Za-z._]+. Unnamed chunks cannot be found.
Searching for chunks looks for the longest tag that matches the beginning
of the search tag (eg, a chunk named "Vertex" will be found with a search
tag of "Vertex.foo"). Note that '.' forms the units for the searc
("Vertex.foo" will not find "Vertex.f").
Unlike glsw, this implementation does not have the concept of effects keys
as that will be separate. Also, this implementation takes strings rather
than file names (thus is more generally useful).
set_bits_t is now 64 bits for x86_64 machines (in linux, anyway). This gave
qfvis a huge speed boost: from ~815s to ~720s.
Also, expose some of the set internals so custom set operators can be
created.
Now we can get tight (<1e-6 * radius_squared error) bounding spheres. More
importantly (for qfvis, anyway), very quickly: 1.7Mspheres/second for a
5-point cloud on my 2.33GHz Core 2 :)
It "works" for lines, triangles and tetrahedrons. For lines and triangles,
it gives the barycentric coordinates of the perpendicular projection of the
point onto the features. Only tetrahedrons are guaranteed to reproduce the
original point.
Rather than prefixing free_ to the supplied name, suffix _freelist to the
supplied name. The biggest advantage of this is it allows the free-list to
be a structure member. It also cleans up the name-space a little.
I'd forgotten that ED_ConvertToPlist mangled light into light_lev and
single component angle values into a vector. This fixes much of the
breakage in qflight (but not the light levels).
Sys_LongTime returns time in microseconds as a 64-bit int. Sys_DoubleTime
uses Sys_LongTime, converts to double and offsets 0 time by 4G (2**32).
This gives us consistent sub-microsecond precision for a very long time.
See http://randomascii.wordpress.com/2012/02/13/dont-store-that-in-a-float/
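The gist (a sketch; the constant and the exact clock source are
assumptions here): pinning the double's exponent with a 2**32 s offset
keeps the resolution at 2**32 * 2**-52, roughly 0.95 us, indefinitely:
#include <stdint.h>

int64_t Sys_LongTime (void);   // microseconds since engine start

double
Sys_DoubleTime (void)
{
    // 4294967296.0 == 2**32 seconds: the offset that fixes the exponent
    return Sys_LongTime () * 1e-6 + 4294967296.0;
}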
First, this completely smashes joystick input: it will not work (though it
doesn't crash). This is because there is, as of yet, no means to configure
the system.
Each joystick axis has:
- per-axis amplification (both pre and post).
- per-axis offset (offset applied after pre-amp but before post amp)
- selectable destination:
- linear delta: position and angles (as before)
- axis button: if the value crosses the threshold, the given key is
pressed or released as appropriate.
The axis amplification still uses joy_amp and joy_pre_amp (and
in_amp/in_pre_amp), but now also has the per-axis settings.
The per-axis offset is most useful for axis buttons. For example, the xbox
360 controller triggers are analog but go "all the way to negative on 0
state". Offsetting the input keeps axis button thresholds simple.
Amplification and offset are applied before anything is done with the axis
value. The formula is:
joy_amp * in_amp * axis-amp *
(offset + value * joy_pre_amp * in_pre_amp * axis-pre_amp)
Axis button thresholds are very simple: if the sign of the value is the
same as the sign of the threshold and abs(value) >= abs(threshold), the
button is pressed. While multiple thresholds and keys can be placed on an
axis, only one can be pressed at a time. The threshold furthest from 0
wins.
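Putting the two together, roughly (a sketch with made-up field names;
only the formula and the threshold rule come from the description above):
#include <math.h>

typedef struct {
    float   pre_amp, amp, offset;   // per-axis settings
    float   threshold;              // axis-button threshold
} axis_t;

static float
axis_value (const axis_t *ax, float raw,
            float joy_amp, float in_amp, float joy_pre, float in_pre)
{
    return joy_amp * in_amp * ax->amp
           * (ax->offset + raw * joy_pre * in_pre * ax->pre_amp);
}

static int
axis_button_pressed (const axis_t *ax, float value)
{
    // same sign as the threshold and at least as far from 0
    return value * ax->threshold > 0
           && fabsf (value) >= fabsf (ax->threshold);
}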
This gives QF a consistent quality PRNG on all platforms. The
implementation is slightly different from the standard, but gives the same
results for the same speed (details in mersenne.c).
Now the user can create and destroy IMTs at will, though destroying IMTs
is currently all or nothing (imt_drop_all).
An IMT is created via imt_create, which takes the keydest name (key_game
etc), the name of the IMT (must be unique across all IMTs) and optionally
the name of the IMT to which the key binding search will fall back if
there is no binding in the current IMT (the fallback IMT must already be
defined and on the same keydest). This means that IMTs now have
user-determined fallback paths. The requirements for the fallback IMT
prevent loops and other weird behaviour.
Actual key binding via in_bind is unaffected. This is why the IMT name must
be unique across all IMTs.
The "imt" command works with the key_game keydest, but imt_keydest is
provided for specifying the active IMT for a specific keydest.
At startup, default IMTs are setup to emulate the previous static IMTs so
old configs will continue to work (mostly). New config files will be
written with commands to drop all of the current IMTs and build new ones,
with the bindings and active IMT set as well.
This fixes the status bar refresh issues in sw. The problem was that with
two viddefs hanging around, things got a little confused and recalc_refdef
wasn't getting into the renderer.
It turns out gcc on little endian machines didn't guarantee the type of
ShortNoSwap due to it being a macro that just returned its parameter. At
the same time, LongNoSwap and FloatNoSwap have been fixed.
qfcc now does local common subexpression elimination. It seems to work, but
is optional (default off): use -O to enable. Also, uninitialized variable
detection is finally back :)
The progs engine now has very basic valgrind-like functionality for
checking pointer accesses. Enable with pr_boundscheck 2
Getting everything right with an enum proved to be too difficult if not
impossible. Also use better tests for equivalence and intersection.
Many more tests have been added. All pass :)
Also move the ALLOC/FREE macros from qfcc.h to QF/alloc.h (needed for
set.c).
Both modules are more generally useful than just for qfcc (eg, set
builtins for ruamoko).
The depth limits in the gl and glsl renderers and in the trace code really
bothered me, but then the fix hit me: at load-time, recurse the trees
normally and record the depth in the appropriate place. The node stacks can
then be allocated as necessary (I chose to add a paranoia buffer of 2, but
I expect the maximum depth will rarely be used).
The attached patch (against quakeforge git) changes the [con]width,
[con]height, and most importantly the rowbytes members of viddef_t
from unsigned to signed int, like in q2. This allows for a properly
negative vid.rowbytes, which may be needed in, e.g., a DIB-section
windows driver. Along with it, I changed a few places
where unsigned int is used along with comparisons against the relevant
vid.* members.
One thing I am not 100% sure about is the signedness requirements of
d_zrowbytes and d_zwidth: q2 has them as unsigned but I am not sure
whether that is because they are needed as unsigned or it was just an
oversight of the id developers. They do look like they should be OK
as signed int to me, though: comments?
==
Note from Bill Currie: I had to do some extra changes as many
signed/unsigned comparisons were somehow missed.
All of the nastiness is hidden in bspfile.c (including the old bsp29
specific data types). However, the conversions between bsp29 and bsp2 are
implemented but not yet hooked up properly. This commit just gets the data
structures in place and the obvious changes necessary to the rest of the
engine to get it to compile, plus a few obvious "make it work" changes.
This should make maintaining them a little easier.
The copyright block in most of the new headers (except vector.h) reflects
when the functions in the relevant header were first created.
Really, when cl_nodelta is in effect (eg, .qwd demo recording and thus
playback). QW now uses the new shared entity state block as I'd intended.
Thanks to the cleanup of ghost entities (ie, entities that have been
removed but continue to be rendered), glsl overkill has gone from 157 to
163 fps :)