After getting in contact with serplord, I now know that the sw alias
loading was correct. Turns out the gl loader was mostly correct, just a
mistaken subtract rather than add. And with that, I can implement alias-16
support in glsl. Better yet, since all the work is done in the loader, the
renderer doesn't know anything about it :) However, I need to create some
16-bit models for testing.
Not all hardware can access a texture sampler from the vertex shader, and I
don't want multiple paths this early in the game. Now, vertex normals are
uploaded as shorts. Should be 14 bytes per vertex (was 10, could have been
8 if I had put the normal index with the vertex rather than st).
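My reading of that 14-byte layout (a rough sketch with made-up names, not
the actual QF struct): 4 bytes of position, 3 shorts of normal, 2 shorts
of st.

    typedef unsigned char byte;

    typedef struct {
        byte    vertex[4];      /* packed position (x, y, z, pad):  4 bytes */
        short   normal[3];      /* vertex normal, as shorts:        6 bytes */
        short   st[2];          /* texture coordinates:             4 bytes */
    } aliasvrt_t;               /* 14 bytes per vertex */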
This uses the same calculations as the software renderer. However, as the
lightmap calculations are not done yet, if there is no dlight in the
vicinity, there will be no light. demo1 is pretty cool to watch :)
While the first frame was fine, any subsequent ones were not. I had
forgotten that hdr->poseverts held the edited vertex count rather than
hdr->mdl.numverts.
GL Quake was weird, culling front faces. Partly understandable, since
Quake's front order is clockwise and GL's default order is
counter-clockwise. However, since the order can be specified, that should
be done instead. Thus, specify the winding order as clockwise (for Quake's
data), set culling for back-face removal, and then mess with the winding
direction in the mirror and fish-eye code.
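In GL terms that comes down to something like this (a sketch, not the
exact QF code):

    glFrontFace (GL_CW);     /* Quake's front faces wind clockwise */
    glEnable (GL_CULL_FACE);
    glCullFace (GL_BACK);    /* remove back faces, not front */

    /* and the mirror / fish-eye code flips the winding when needed: */
    glFrontFace (GL_CCW);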
Vertex locations need to be unsigned byte rather than byte (GL is funny
with that). s and t need to be at least short, and since the normal index
is embedded in the st vector, it needs to be the same type. With this, my
test tetrahedrons seem to be working.
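In attribute-setup terms that means something like this (hypothetical
names; the point is the types):

    /* unsigned bytes for the position, shorts for st with the normal
     * index riding along as the third component */
    glVertexAttribPointer (vertex_loc, 3, GL_UNSIGNED_BYTE, GL_FALSE,
                           stride, vertex_offset);
    glVertexAttribPointer (st_loc, 3, GL_SHORT, GL_FALSE,
                           stride, st_offset);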
The plan is to do interpolated sprites based on a discussion I had with
LordHavoc about them: blending between the two sprite frames. He didn't
mention it, but it looks like blending between the sprite frames' verts is
necessary too.
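Rough sketch of what that blending looks like (made-up names):

    /* lerp each of the sprite's four corner verts between the two frames,
     * using the same blend factor as the frame blend itself */
    for (int i = 0; i < 4; i++) {
        for (int j = 0; j < 3; j++)
            verts[i][j] = (1.0 - blend) * frame1->verts[i][j]
                        + blend * frame2->verts[i][j];
    }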
Since I'm not specifying the api when creating my context, mesa is giving
me GL2. This is fair enough, but in GL2, vertex attribute 0 is the vertex
position. This too, is fair enough. The problem is, mesa is assigning 0 to
my vcolor attribute and thus I can't set it. The workaround is simply to
swap the declaration order of vcolor and vertex (this really shouldn't work
either, I suspect).
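If the declaration-order trick ever stops working, the explicit fix would
be to pin the position attribute to 0 by hand before linking (not what the
code does now, just the standard GL2 call):

    glBindAttribLocation (program, 0, "vertex");
    glLinkProgram (program);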
textbox, crosshair, tileclear, fill and fadescreen are the remaining
functions.
There's a problem with 2d icons being drawn over the text (instead of
under), but I'll leave that for later.
Add GLSL directory to QF and use it for all glsl-based code (instead of
GL). defines.h, types.h and funcs.h are derived from gl2.h from Khronos.
Text drawing now uses triangles instead of quads.
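Each character cell is now emitted as two triangles, roughly like this
(sketch, with a made-up add_vert (x, y, s, t) helper):

    add_vert (x,     y,     s1, t1);    /* first triangle */
    add_vert (x + 8, y,     s2, t1);
    add_vert (x + 8, y + 8, s2, t2);

    add_vert (x,     y,     s1, t1);    /* second triangle */
    add_vert (x + 8, y + 8, s2, t2);
    add_vert (x,     y + 8, s1, t2);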
This means that nq-glslx can get through demo1, demo2, demo3 and bigass1
without crashing. However, nothing is rendered, so unless you like black,
it's not very interesting.
The links are now in "instance surfaces". For non-instanced models (world,
doors, plats etc (ie, world and its sub-models)), there will be one
instance surface per model surface. However, for instanced models (ammo
boxes etc), there will be many, dynamically allocated (not yet
implemented). This commit gets the static instance surfaces working.
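Roughly, an instance surface is just a link node that points back at the
shared model surface (a sketch with made-up fields, not the actual QF
definition):

    typedef struct instsurf_s {
        struct instsurf_s *next;        /* texture chain link */
        msurface_t        *surface;     /* the shared model surface */
        void              *instance;    /* owning entity; null for world */
    } instsurf_t;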
This has several benefits:
o The silly issue with alias model pitches being backwards is kept out
of the renderer (it's a QuakeC thing: entities do their pitch
backwards, but originally, only alias models were rotated. Hipnotic
did brush entity rotations in the correct direction).
o Angle to frame vector conversions are done only when the entity's
angles vector changes, rather than every frame. This avoids a lot of
unnecessary trig function calls (see the sketch after this list).
o Once transformed, an entity's frame vectors are always available.
However, the vectors are left handed rather than right handed (ie,
forward/left/up instead of forward/right/up): just a matter of
watching the sign. This avoids even more trig calls (flag models in
qw).
o This paves the way for merging brush entity surface rendering with the
world model surface rendering (the actual goal of this patch).
o This also paves the way for using quaternions to represent entity
orientation, as that would be a protocol change.
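The caching mentioned in the second point looks roughly like this
(hypothetical field names; the real code keeps forward/left/up, while
stock AngleVectors gives right):

    /* only redo the angles -> frame vectors conversion when the entity's
     * angles actually change */
    if (!VectorCompare (ent->angles, ent->cached_angles)) {
        AngleVectors (ent->angles, ent->frame_forward, ent->frame_right,
                      ent->frame_up);
        VectorCopy (ent->angles, ent->cached_angles);
    }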
It turns out that due to the way we do fullbrights, nothing special needs
to be done to get the fullbright texture blended with the model even when
fog is enabled.
I got rather tired of there being multiple definitions of mostly compatible
plane types (and I need a common type anyway). dplane_t still exists for
now because I want to be careful when messing with the actual bsp format.
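The common type is essentially the familiar Quake plane (a sketch; vec3_t
is the usual float[3]):

    typedef struct plane_s {
        vec3_t  normal;
        float   dist;
        byte    type;       /* for fast axial side tests */
        byte    signbits;   /* signs of the normal's components */
        byte    pad[2];
    } plane_t;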
Move the texture coordinates in by 1/4 of a pixel. To avoid unnecessary
calculations, pre-calculate the character cell texture coordinates and
blast them into the texture coordinate array.
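Something like this (a sketch; assumes the usual 128x128 conchars texture
with a 16x16 grid of 8x8 pixel cells, and the names are mine):

    static float char_st[256][4];       /* s1, t1, s2, t2 per character */

    static void
    precalc_char_st (void)
    {
        const float cell = 1.0 / 16.0;      /* one character cell */
        const float ofs = 0.25 / 128.0;     /* the 1/4 pixel inset */

        for (int i = 0; i < 256; i++) {
            float s = (i & 15) * cell;
            float t = (i >> 4) * cell;

            char_st[i][0] = s + ofs;
            char_st[i][1] = t + ofs;
            char_st[i][2] = s + cell - ofs;
            char_st[i][3] = t + cell - ofs;
        }
    }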