This severely reduces the calls to BindTexture and, more importantly,
glUseProgram, EnableVertexAttribArray, etc. The biggest changes are:
o icons and text are all in the one giant texture
o icons and text are mixed in the one queue
This gave a ~9% speedup for bigass1 (159->174 fps).
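
Every 2d element, icon or glyph, now samples the same atlas and goes
through the same queue, so a frame's worth of 2d drawing flushes with a
single texture bind and a single draw. A minimal sketch of the idea with
made-up names (draw_queue_t, flush_queue and the attribute location are
assumptions, not the actual code; the program and attrib arrays are assumed
to be enabled once at frame setup):

    /* Hypothetical sketch, not the real implementation: icons and text are
       queued as quads (triangle pairs) that all sample one atlas texture,
       so flushing costs one bind and one draw instead of one per element. */
    #include <GLES2/gl2.h>

    #define MAX_QUADS 1024

    typedef struct {
        GLfloat verts[MAX_QUADS * 6 * 4];   /* x, y, s, t per vertex */
        int     count;                      /* quads queued so far */
        GLuint  atlas;                      /* icons and text share this */
        GLuint  attr;                       /* vertex attribute location */
    } draw_queue_t;

    static void
    flush_queue (draw_queue_t *q)
    {
        if (!q->count)
            return;
        glBindTexture (GL_TEXTURE_2D, q->atlas);        /* one bind */
        glVertexAttribPointer (q->attr, 4, GL_FLOAT, GL_FALSE, 0, q->verts);
        glDrawArrays (GL_TRIANGLES, 0, q->count * 6);   /* one draw */
        q->count = 0;
    }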
I didn't like the way client/server code was poking around at the
implementation. Instead, a couple of accessor functions now provide the
same information.
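
For illustration only (these names are invented, not the actual functions):
outside code asks for the information through a call instead of reading the
implementation's internals directly.

    /* Hypothetical sketch: client/server code uses the accessors rather
       than reaching into the implementation's private state. */
    static struct {
        int     width;
        int     height;
    } impl_state;

    int get_impl_width (void)  { return impl_state.width; }
    int get_impl_height (void) { return impl_state.height; }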
There are still many issues to sort out, but the basics are working.
Problems:
rendered fullbright (no lighting done)
normals are ignored
extra textures (glow etc) not used/loaded
4 models on the screen don't seem to be a problem.
Since iqm vertex arrays are variable, and I don't want to calculate the
stride every time I render a model, I cached the value used when building
the arrays.
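
A sketch of what the caching might look like; the struct and field names
here are assumptions, not the actual iqm code:

    /* Hypothetical sketch: the per-vertex stride depends on which iqm
       vertex arrays are present, so it is computed once while the arrays
       are built and cached for use at render time. */
    #include <stddef.h>

    typedef struct {
        unsigned    size;       /* components in this vertex array */
        unsigned    bytes;      /* bytes per component */
    } va_desc_t;

    typedef struct {
        unsigned char  *verts;  /* interleaved vertex data */
        size_t          stride; /* cached bytes per vertex */
    } mesh_arrays_t;

    static void
    cache_stride (mesh_arrays_t *mesh, const va_desc_t *va, int num_arrays)
    {
        size_t  stride = 0;
        int     i;

        for (i = 0; i < num_arrays; i++)
            stride += va[i].size * va[i].bytes;
        mesh->stride = stride;  /* reused every render instead of recalculating */
    }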
VectorUnshear uses the exact same shear vector to remove shear from a
sheared vector, i.e. with:
VectorShear (shear, v, w);
VectorUnshear (shear, w, x);
x == v within fp math limits.
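
A minimal sketch of the round trip, assuming a lower-triangular convention
with shear = {xy, xz, yz} (the actual VectorShear may order its terms
differently):

    /* Sketch only: the convention assumed here may not match VectorShear. */
    static void
    vector_shear (const double shear[3], const double v[3], double out[3])
    {
        out[0] = v[0];
        out[1] = v[1] + shear[0] * v[0];
        out[2] = v[2] + shear[1] * v[0] + shear[2] * v[1];
    }

    static void
    vector_unshear (const double shear[3], const double v[3], double out[3])
    {
        /* undo the shear in dependency order: x, then y, then z */
        out[0] = v[0];
        out[1] = v[1] - shear[0] * out[0];
        out[2] = v[2] - shear[1] * out[0] - shear[2] * out[1];
    }

Because each component is rebuilt from already-corrected components, the
round trip returns the original vector exactly up to rounding, which is the
x == v check above.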
And the tests really exercised VectorShear (first attempt had things
messed up when more than one shear value was non-zero). Also,
Mat4Decompose wasn't orthogonalizing the z axis row. Oops. Anyway,
Mat4Decompose is now known to work well, and the usage of its output is
understood :)
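
For reference, the missing step amounts to a Gram-Schmidt pass: in the
usual decomposition the third row has to be made orthogonal to the first
two before its length is taken as the z scale. A hedged sketch of that
step, not the Mat4Decompose code itself:

    /* Sketch: make row z orthogonal to the (already normalized) rows x and
       y before measuring its length.  Assumes row-vector layout. */
    static void
    orthogonalize_z (double z[3], const double x[3], const double y[3])
    {
        double  d;
        int     i;

        d = z[0] * x[0] + z[1] * x[1] + z[2] * x[2];    /* z . x */
        for (i = 0; i < 3; i++)
            z[i] -= d * x[i];
        d = z[0] * y[0] + z[1] * y[1] + z[2] * y[2];    /* z . y */
        for (i = 0; i < 3; i++)
            z[i] -= d * y[i];
    }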
I'd gotten the norm and magnitude mixed up (partly because the document I
was following got the names mixed up), and then munged the formulas
together.
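
For the record, one common convention (assumed here; it may or may not be
the one the document used) is that the norm is the sum of squares and the
magnitude is its square root, so normalization divides by the latter:

    /* Sketch of the assumed convention, for q = (w, x, y, z):
       norm is the sum of squares, magnitude its square root. */
    #include <math.h>

    static double
    quat_norm (const double q[4])
    {
        return q[0] * q[0] + q[1] * q[1] + q[2] * q[2] + q[3] * q[3];
    }

    static double
    quat_magnitude (const double q[4])
    {
        return sqrt (quat_norm (q));
    }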
I got the idea from Blender when I discovered by accident that quat * vect
produces the same result as quat * qvect * quat* and looked up the code to
check what was going on. While matrix/vector multiplication still beats the
pants off quaternion/vector multiplication, QuatMultVec is a slight
optimization over quat * qvect * quat* (17+,24* vs 24+,32*, plus no need
to generate quat*).
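
For illustration, one common way to rotate a vector directly by a unit
quaternion without forming the conjugate or a pure quaternion; this is a
hedged sketch, not the actual QuatMultVec, and its operation count differs
from the figures quoted above:

    /* Sketch: v' = v + 2*w*(qv x v) + 2*(qv x (qv x v)) for a unit
       quaternion q, assuming a (w, x, y, z) layout. */
    static void
    quat_rotate_vec (const double q[4], const double v[3], double out[3])
    {
        const double   *qv = q + 1;     /* vector part */
        double          t[3], u[3];

        /* t = qv x v */
        t[0] = qv[1] * v[2] - qv[2] * v[1];
        t[1] = qv[2] * v[0] - qv[0] * v[2];
        t[2] = qv[0] * v[1] - qv[1] * v[0];
        /* u = qv x t */
        u[0] = qv[1] * t[2] - qv[2] * t[1];
        u[1] = qv[2] * t[0] - qv[0] * t[2];
        u[2] = qv[0] * t[1] - qv[1] * t[0];
        out[0] = v[0] + 2 * (q[0] * t[0] + u[0]);
        out[1] = v[1] + 2 * (q[0] * t[1] + u[1]);
        out[2] = v[2] + 2 * (q[0] * t[2] + u[2]);
    }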
This avoids sending invalid pose data to the renderer. The symptom was a
vertex array offset higher than the vertex array size. Discovered by calim
of nouveau while he was debugging a driver problem found by QF. Many
thanks.
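
A hedged sketch of the kind of check involved (names invented; per the
above, the real fix validates the data before it reaches the renderer):

    /* Hypothetical sketch: reject an out-of-range pose index so the vertex
       array offset computed from it can never pass the end of the array. */
    static int
    check_pose (int pose, int num_poses)
    {
        if (pose < 0 || pose >= num_poses)
            pose = 0;       /* fall back to a known-good pose */
        return pose;
    }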