This fixes the status bar refresh issues in sw. The problem was that with
two viddefs hanging around, things got a little confused and recalc_refdef
wasn't getting through to the renderer.
It turns out gcc on little-endian machines didn't guarantee the result type
of ShortNoSwap, due to it being a macro that just returned its parameter. At
the same time, LongNoSwap and FloatNoSwap have been fixed.
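Roughly what that fix looks like, as a minimal sketch (assuming the
traditional Quake swap signatures; the real definitions may differ): real
functions pin down the result type where the old pass-through macros didn't.

    /* Old style: a pass-through macro keeps whatever type its argument had.
       #define ShortNoSwap(s) (s)
       Real (inline) functions guarantee the declared result type. */
    static inline short
    ShortNoSwap (short s)
    {
        return s;
    }

    static inline int
    LongNoSwap (int l)
    {
        return l;
    }

    static inline float
    FloatNoSwap (float f)
    {
        return f;
    }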
qfcc now does local common subexpression elimination. It seems to work, but
is optional (default off): use -O to enable. Also, uninitialized variable
detection is finally back :)
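For illustration only (plain C rather than Ruamoko, and not qfcc's actual
pass), local CSE means a subexpression repeated within a block is computed
once:

    /* before: a * b is evaluated twice */
    float cse_before (float a, float b, float c, float d, float *x)
    {
        *x = a * b + c;
        return a * b + d;
    }

    /* after: the compiler effectively introduces a temporary */
    float cse_after (float a, float b, float c, float d, float *x)
    {
        float t = a * b;
        *x = t + c;
        return t + d;
    }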
The progs engine now has very basic valgrind-like functionality for
checking pointer accesses. Enable it with pr_boundscheck 2.
Getting everything right with an enum proved to be too difficult, if not
impossible. Also use better tests for equivalence and intersection.
Many more tests have been added. All pass :)
Also move the ALLOC/FREE macros from qfcc.h to QF/alloc.h (needed for
set.c).
Both modules are more generally useful than just for qfcc (eg, set
builtins for ruamoko).
The depth limits in the gl and glsl renderers and in the trace code really
bothered me, but then the fix hit me: at load-time, recurse the trees
normally and record the depth in the appropriate place. The node stacks can
then be allocated as necessary (I chose to add a paranoia buffer of 2, but
I expect the maximum depth will rarely be used).
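The idea, sketched in C (the node and stack types here are stand-ins, not
the engine's actual ones):

    #include <stdlib.h>

    typedef struct node_s {
        struct node_s *children[2];     /* NULL stands in for leafs here */
    } node_t;

    /* Recurse the tree once at load time and record its maximum depth. */
    static int
    tree_depth (const node_t *node)
    {
        int         left, right;

        if (!node)
            return 0;
        left = tree_depth (node->children[0]);
        right = tree_depth (node->children[1]);
        return 1 + (left > right ? left : right);
    }

    /* The traversal stack can then be sized exactly (plus a small paranoia
       buffer) instead of relying on a fixed compile-time depth limit. */
    static const node_t **
    alloc_node_stack (const node_t *root)
    {
        return malloc ((tree_depth (root) + 2) * sizeof (const node_t *));
    }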
The attached patch (against quakeforge git) changes the [con]width,
[con]height, and most importantly the rowbytes members of viddef_t
from unsigned to signed int, like in q2. This allows for a properly
negative vid.rowbytes, which may be needed in, e.g., a DIB-section
windows driver. Along with it, I changed a few places where unsigned
int is used in comparisons against the relevant vid.* members.
One thing I am not 100% sure about is the signedness requirements of
d_zrowbytes and d_zwidth: q2 has them as unsigned, but I am not sure
whether that is because they need to be unsigned or it was just an
oversight by the id developers. They do look like they should be OK
as signed int to me, though: comments?
==
Note from Bill Currie: I had to make some extra changes, as many
signed/unsigned comparisons had somehow been missed.
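For what it's worth, the reason a negative rowbytes is useful (illustrative
addressing only, not code from the patch): a bottom-up surface such as a DIB
section can be addressed top-down by pointing at its first visible row and
stepping with a negative stride, which only works if rowbytes is signed.

    #include <stdint.h>

    static inline uint8_t *
    row_address (uint8_t *buffer, int rowbytes, int y)
    {
        /* rowbytes may legitimately be negative for bottom-up surfaces;
           an unsigned rowbytes would wrap instead of stepping backwards. */
        return buffer + (intptr_t) y * rowbytes;
    }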
All of the nastiness is hidden in bspfile.c (including the old bsp29
specific data types). However, the conversions between bsp29 and bsp2 are
implemented but not yet hooked up properly. This commit just gets the data
structures in place and makes the obvious changes necessary for the rest of
the engine to compile, plus a few "make it work" changes.
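The conversions themselves are mostly field-for-field widening. A heavily
hedged sketch (the struct and field names are illustrative stand-ins, not
bspfile.c's actual definitions):

    #include <stdint.h>

    /* bsp29 stores node child indices and bounds as 16-bit values; bsp2
       widens them (child indices to 32 bits, bounds to floats).  These
       are cut-down stand-in types for illustration only. */
    typedef struct { int16_t children[2]; int16_t mins[3], maxs[3]; } bsp29_node_sketch_t;
    typedef struct { int32_t children[2]; float   mins[3], maxs[3]; } bsp2_node_sketch_t;

    static void
    node_29_to_2 (const bsp29_node_sketch_t *in, bsp2_node_sketch_t *out)
    {
        int         i;

        for (i = 0; i < 2; i++)
            out->children[i] = in->children[i];
        for (i = 0; i < 3; i++) {
            out->mins[i] = in->mins[i];
            out->maxs[i] = in->maxs[i];
        }
    }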
This should make maintaining them a little easier.
The copyright block in most of the new headers (except vector.h) reflects
when the functions in the relevant header were first created.
Really, when cl_nodelta is in effect (eg, .qwd demo recording and thus
playback). QW now uses the new shared entity state block as I'd intended.
Thanks to the cleanup of ghost entities (ie, entities that have been
removed but continue to be rendered), glsl overkill has gone from 157 to
163 fps :)
It turns out glsl, sw and sw32 weren't getting any benefit from R_CullBox
because the frustum wasn't set up :P. Get another 8% out of bigass1
(174->184fps). bigass1 now runs 2x as fast as it did before I started this
optimisation run :)
This severely reduces the calls to BindTexture and, more importantly,
glUseProgram, EnableVertexAttribArray etc. The biggest changes are:
o icons and text are all in the one giant texture
o icons and text are mixed in the one queue
This gave ~9% speedup for bigass1 (159->174fps).
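The gist of the batching, as a sketch only (standard GL calls, but not the
actual glsl draw code; the GLES2 header and attribute locations 0/1 are
assumptions here): quads that share the one atlas texture pile up in a queue
and go out in a single draw call, so the expensive state changes happen once
per flush rather than once per icon or glyph.

    #include <GLES2/gl2.h>

    typedef struct { float xy[2], st[2]; } quadvert_t;

    #define MAX_VERTS (1024 * 6)        /* 6 verts per quad (2 tris) */
    static quadvert_t draw_queue[MAX_VERTS];
    static int        num_verts;        /* filled by the (elided) queueing code */

    static void
    flush_draw_queue (GLuint program, GLuint atlas_tex)
    {
        if (!num_verts)
            return;
        glUseProgram (program);
        glBindTexture (GL_TEXTURE_2D, atlas_tex);
        glEnableVertexAttribArray (0);
        glEnableVertexAttribArray (1);
        glVertexAttribPointer (0, 2, GL_FLOAT, GL_FALSE,
                               sizeof (quadvert_t), &draw_queue[0].xy);
        glVertexAttribPointer (1, 2, GL_FLOAT, GL_FALSE,
                               sizeof (quadvert_t), &draw_queue[0].st);
        glDrawArrays (GL_TRIANGLES, 0, num_verts);
        num_verts = 0;
    }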
For certain values of "fix" ;). Both are brought back to life but
idealpitch is never set (always 0) and viewheight is set in V_RenderView().
However, this brings the rest of the code in cl_view.c just that little bit
closer to merged :)
I didn't like the way client/server code was poking around at the
implementation. Instead, provide a couple of accessor functions for the
same information.
gl, sw and sw32 use blend palettes, so share the code. This also abandons
the optimization for transforming verts in sw (had all sorts of problems
anyway). sw still doesn't work, though.
There are still many issues to sort out, but the basics are working.
Problems:
rendered fullbright (no lighting done)
normals are ignored
extra textures (glow etc) not used/loaded
4 models on the screen don't seem to be a problem.
Since iqm vertex arrays are variable, and I don't want to calculate the
stride every time I render a model, the value used when building the
arrays is now cached.
VectorUnshear uses the exact same shear vector to remove shear from a
sheared vector. ie with:
VectorShear (shear, v, w);
VectorUnshear (shear, w, x);
x == v within fp math limits.
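For reference, a sketch of the pair with one common shear convention
(shear = (xy, xz, yz), unit lower-triangular; QF's actual component ordering
may differ). The inverse of a unit-triangular shear uses the very same shear
values, which is why the round trip recovers v:

    static void
    VectorShear_sketch (const float shear[3], const float v[3], float out[3])
    {
        out[0] = v[0];
        out[1] = v[1] + shear[0] * v[0];
        out[2] = v[2] + shear[1] * v[0] + shear[2] * v[1];
    }

    static void
    VectorUnshear_sketch (const float shear[3], const float w[3], float out[3])
    {
        out[0] = w[0];
        out[1] = w[1] - shear[0] * w[0];
        /* note: uses the already-unsheared y component */
        out[2] = w[2] - shear[1] * w[0] - shear[2] * out[1];
    }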
And the tests really exercised VectorShear (first attempt had things
messed up when more than one shear value was non-zero). Also,
Mat4Decompose wasn't orthogonalizing the z axis row. Oops. Anyway,
Mat4Decompose is now known to work well, and the usage of its output is
understood :)
I'd gotten the norm and magnitude mixed up (partly because the document I
was following got the names mixed up), and then munged the formulas
together.
Now it doesn't matter if you get 22 fps or 72, you jump the same height,
which actually happens to be slightly higher than the previous 72fps jump.
Effectively, you jump the height you would if you got infinite fps ;)
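One standard way to get exactly that behaviour, for what it's worth (this is
the general technique, not necessarily the precise change made here): split
the gravity application so half is applied before the position update and
half after, which makes the arc independent of the frame time for constant
gravity.

    /* Velocity-Verlet style integration of a constant acceleration:
       the resulting jump height is the same for any frametime. */
    static void
    gravity_move (float *origin_z, float *velocity_z,
                  float gravity, float frametime)
    {
        *velocity_z -= 0.5f * gravity * frametime;
        *origin_z   += *velocity_z * frametime;
        *velocity_z -= 0.5f * gravity * frametime;
    }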
I got the idea from blender when I discovered by accident that quat * vect
produces the same result as quat * qvect * quat* and looked up the code to
check what was going on. While matrix/vector multiplication still beats the
pants off quaternion/vector multiplication, QuatMultVec is a slight
optimization over quat * qvect * quat* (17+,24* vs 24+,32*, plus no need
to generate quat*).
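The underlying identity, sketched with plain float arrays (q = (x, y, z, w),
assumed unit length; QF's actual QuatMultVec may be organized differently):
t = 2 (q.xyz × v), then v' = v + q.w t + q.xyz × t.

    static void
    quat_rotate_vec (const float q[4], const float v[3], float out[3])
    {
        float       t[3];

        /* t = 2 * cross (q.xyz, v) */
        t[0] = 2 * (q[1] * v[2] - q[2] * v[1]);
        t[1] = 2 * (q[2] * v[0] - q[0] * v[2]);
        t[2] = 2 * (q[0] * v[1] - q[1] * v[0]);
        /* out = v + q.w * t + cross (q.xyz, t) */
        out[0] = v[0] + q[3] * t[0] + q[1] * t[2] - q[2] * t[1];
        out[1] = v[1] + q[3] * t[1] + q[2] * t[0] - q[0] * t[2];
        out[2] = v[2] + q[3] * t[2] + q[0] * t[1] - q[1] * t[0];
    }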
This avoids sending invalid pose data to the renderer. The symptom was a
vertex array offset higher than the vertex array size. Discovered by calim
of nouveau while he was debugging a driver problem found by QF. Many
thanks.
This allows the vid module to load the render module and access
render-specific functions before the renderer initializes, which happens to
need an initialized vid module...
The renderer now gets initialized and things sort of work (qw-client will
idle, though nothing is displayed). However, as the viddef stuff is broken,
it segfaults when trying to run the overkill demo.
Still, nothing will work: no plugins are loaded and they're all broken
anyway.
glx, sgl, glslx etc are going away; just the basics will be built: fbdev
(which will probably go away eventually), sdl, x11 and hopefully someday
win. That's actually the only reason anything links.
Where possible, symbols have been made static, prefixed with glsl_/GLSL_ or
moved into the code shared by all renderers. This will make doing plugins
easier, but it's been done now for link testing. The moving was done via the gl
commit.
Where possible, symbols have been made static, prefixed with gl_/GL_ or
moved into the code shared by all renderers. This will make doing plugins
easier, but it's been done now for link testing.
The api hides all the gory details of message buffer setup and usage
(particularly the differences between writing and reading). Most
importantly, the api provides a safe way to read and write binary data
(always little endian).
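The flavour of it, with illustrative helpers (not the real api's names or
signatures): all binary reads and writes go through functions that fix the
byte order, so the wire format is little endian regardless of host.

    #include <stdint.h>

    static void
    put_little_short (uint8_t *buf, uint16_t val)
    {
        buf[0] = val & 0xff;
        buf[1] = (val >> 8) & 0xff;
    }

    static uint16_t
    get_little_short (const uint8_t *buf)
    {
        return (uint16_t) (buf[0] | (buf[1] << 8));
    }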
Most subsystems that depend on other subsystems now call the init functions
themselves. This makes for much cleaner client initialization (more work
needs to be done for the server).
The renderer should now be free of any direct access to client code. Even
3d rendering is now done via a function pointer.
The cshift code is done as a 2d screen function.
Unfortunately, the maximum point size on Intel hardware seems to be 1, so I
can't tell if the colors are right.
This is largely just a hacked version of GL's particle code.
For now, only the glsl loader disables caching, but it stores the frame
vertices in GL memory, so its hunk usage is relatively lower (and will be
lower still when I get skins sorted out).
Unfortunately, the intel driver on my eeepc doesn't like the mipmaps for
plat_top2 or +2floorsw. If I either don't load their mipmaps, or skip
drawing them, things seem to work nicely.
It turns out my complicated plan was just that: complicated. Although there
are currently some bugs, the method I used to build the VBO in the first
place will work equally well for building the index lists.
The entire vertex set from every model is put into one list (not yet
uploaded). Chains of element arrays are built for non-instanced models
(instanced models will have their chains built each frame).
Still nothing being rendered: still in the process of building the display
lists, but I'm making good progress. Get this into git before something
goes wrong :)
After getting in contact with serplord, I now know that the sw alias
loading was correct. Turns out the gl loader was mostly correct, just a
mistaken subtract rather than an add. And with that, I can implement alias-16
support in glsl. Better yet, since all the work is done in the loader, the
renderer doesn't know anything about it :) However, I need to create some
16-bit models for testing.
Not all hardware can access a texture sampler from the vertex shader, and I
don't want multiple paths this early in the game. Now, vertex normals are
uploaded as shorts. Should be 14 bytes per vertex (was 10, could have been
8 if I had put the normal index with the vertex rather than st).
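One layout that matches those byte counts, purely for illustration (not
necessarily the actual glsl alias vertex format): 4 bytes of position, 3
shorts of normal, 2 shorts of st, 4 + 6 + 4 = 14 bytes.

    #include <stdint.h>

    typedef struct {
        uint8_t     vertex[4];      /* xyz + pad                         */
        int16_t     normal[3];      /* vertex normal, uploaded as shorts */
        int16_t     st[2];          /* texture coordinates               */
    } aliasvrt_sketch_t;            /* sizeof == 14 */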
GL Quake was weird, culling front faces. Partly understandable, since
Quake's front order is clockwise and GL's default order is
counter-clockwise. However, since the order can be specified, that should
be done instead. Thus, specify the winding order as clockwise (for quake's
data), set culling for back-face removal, and then mess with the winding
direction in the mirror and fish-eye code.
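In GL terms, that amounts to the standard calls below (where exactly they're
made in the code is another matter):

    #include <GL/gl.h>

    /* Quake's data winds clockwise, so declare that and cull back faces,
       rather than keeping GL's CCW default and culling front faces. */
    static void
    set_quake_winding (void)
    {
        glFrontFace (GL_CW);
        glEnable (GL_CULL_FACE);
        glCullFace (GL_BACK);
    }

    /* The mirror / fisheye code flips the winding around its own drawing. */
    static void
    set_mirrored_winding (int mirrored)
    {
        glFrontFace (mirrored ? GL_CCW : GL_CW);
    }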
Vertex locations need to be unsigned byte rather than byte (GL is funny
with that). s and t need to be at least short, and since the normal index
is embedded in the st vector, it needs to be the same type. With this, my
test tetrahedrons seem to be working.