tarred, feathered, stabbed, shot, drawn, quartered, and then really hurt.
Any hardware that requires it should be incinerated as worthless garbage.
Yes, this means that we now change blending states often again. This may
recover much of the lost FPS people were having with certain cards and
drivers. Sorry guys, I didn't consider that card makers could be such
complete idiots.
On the plus side, all major bugs outstanding in the GL renderer should be
resolved excepting the banding on 3dfx cards. As soon as Mercury gets me
the documentation on the gamma ramp extension, I'll be using it (hint..)
This is your cue to merge my changes into the main tree taniwha.
No depth polys yet. Waterripple added. Version display while downloading
removed. gl_finish sort of removed (the cvar still needs to be pulled);
gl_ztrick is next. I understand the GL renderer and what I plan to do with
it better now, so I can start pounding away at that after I wake up.
Much better for clearing the screen and stuff; it will get used more later
on. For now it just puts the sky's edge off in the distance and makes
the non-skybox sky more of a dome than a box with rounded-off edges.
Visual bug: drawing everything this way leaves blending off, so particles
end up being solid tris. All I gotta do is turn blend back on, no biggie.
Visual bug: the skydome being in the distance creates the same see-through
walls effect that skyboxes have. Not a problem, since I know where to fix
that.
There's tons of dead code in here still, I'm probably going to move the
sky stuff into gl_sky.c and clean up LordHavoc's code a bit, I can do
the skybox at least cheaper than he does. We'll see about the dome.
Since GL_Bind has no place in a wholesome, family project (yeah right),
it will not be seen anymore.
But fret not. If you need a replacement, just use glBindTexture the way
SGI intended. In fact, every single GL_Bind (target) call was simply
replaced with glBindTexture (GL_TEXTURE_2D, target). Since that's more
or less all GL_Bind () did anyway, this just saves a function call!
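For the curious, here's roughly what that change amounts to; the old
wrapper's body is approximated from the description above, not quoted:

    /* before: a thin wrapper, something like */
    void GL_Bind (int texnum)
    {
        glBindTexture (GL_TEXTURE_2D, texnum);
    }

    /* after: call sites just bind directly */
    glBindTexture (GL_TEXTURE_2D, texnum);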
Still to make work properly:
Win32 thing.. If you don't free textures explicitly, you can cause a
problem with nVidia drivers.
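A minimal sketch of what "freeing explicitly" means here, assuming
glGenTextures-style allocation (the real code may allocate differently):

    GLuint texnum;

    glGenTextures (1, &texnum);
    /* ... upload and draw with the texture ... */
    glDeleteTextures (1, &texnum);  /* free it yourself; don't count on
                                       the driver to clean up at exit */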
Colored lighting is now RGB instead of RGBA. The alpha is kinda pointless
on a lightmap and the effect's not all that great. Plus, people stuck with
16-bit OpenGL (any other 3dfx people out there?) will be quite pleased
with the improvement in image quality. This does include LordHavoc's
dynamic light optimization code which takes most of the pain out of having
gl_flashblend off.
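For illustration, the lightmap upload now looks something like this
(variable names are hypothetical; BLOCK_WIDTH/BLOCK_HEIGHT are the usual
Quake lightmap block constants):

    /* 3 bytes per luxel instead of 4.  On 16-bit cards GL_RGB can be
       stored as RGB565 instead of RGBA4444, which is likely where the
       image quality win comes from. */
    glTexImage2D (GL_TEXTURE_2D, 0, GL_RGB, BLOCK_WIDTH, BLOCK_HEIGHT,
                  0, GL_RGB, GL_UNSIGNED_BYTE, lightmap_data);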
All glColor*'s are now half of what they used to be, except where they
aren't. If that doesn't make sense, don't worry. If you see one that's
only half what you'd expect, don't worry; it's probably meant to be like
that. (More below.)
glDisable (GL_BLEND) is now a thing of the GL_PAST. As is GL_REPLACE.
Instead, we _always_ use GL_MODULATE and leave GL_BLEND turned on. This
seems at first like it might be a performance hit, but I swear it's much
more expensive to change blending modes and texture functions 20-30 times
every frame!
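In other words, the state gets set once at init, something like this
sketch (the blend function shown is an assumption on my part):

    glTexEnvf (GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE);
    glEnable (GL_BLEND);
    glBlendFunc (GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);

    /* opaque surfaces then just draw with alpha 1.0, which makes the
       blend a no-op rather than yet another state change */
    glColor4f (0.5, 0.5, 0.5, 1.0);   /* RGB halved, per the note above */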
Win32 issue.. Even though we check for multitexture, we currently don't
use it. Reason is that I am planning to replace SGIS_multitexture with
the preferred ARB_multitexture extension which is supported in most GL 1.1
implementations and is a requirement for GL 1.2 anyway. I also wanted to
get rid of some duplicated code. Since Linux doesn't support multitexture
yet, I just commented out the code that was keeping it from compiling.
Win32 should work without it until it's fixed, which shouldn't take long,
since the differences between SGIS and ARB multitexture, as far as Quake
is concerned, are minimal AT BEST.
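For reference, the SGIS-to-ARB mapping for the usual world + lightmap
passes is about this (the qgl* names are hypothetical function pointers
fetched through the extension mechanism):

    /* SGIS: glSelectTextureSGIS (TEXTURE0_SGIS);
             glMTexCoord2fSGIS (TEXTURE0_SGIS, s, t);
       ARB equivalent: */
    qglActiveTextureARB (GL_TEXTURE0_ARB);
    glBindTexture (GL_TEXTURE_2D, world_texture);
    qglActiveTextureARB (GL_TEXTURE1_ARB);
    glBindTexture (GL_TEXTURE_2D, lightmap_texture);

    /* per vertex: */
    qglMultiTexCoord2fARB (GL_TEXTURE0_ARB, s, t);
    qglMultiTexCoord2fARB (GL_TEXTURE1_ARB, ls, lt);
    glVertex3fv (vertex);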
LordHavoc and I have been working tirelessly (well not quite, we both did
manage to sleep sometime during this ordeal) to fix the lighting in the GL
renderers! It looks DAMNED CLOSE to software's lighting now, including
the ability to overbright a color. You've gotta see this to know what I'm
talking about. That's why the glColor*'s are halved in most places. The
gamma table code and the general way it works is LordHavoc's design, but
over the course of re-implementing it in QF we did come up with a few more
small optimizations.
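My hedged sketch of the halved-color idea (the real QF code is more
involved, and the exact scaling is LordHavoc's design, not mine):

    /* With GL_MODULATE on everywhere, writing colors at half intensity
       leaves headroom: 0.5 means "fully lit", and values up to 1.0 give
       2x overbright.  The gamma table maps 0.5 back up to full
       brightness on screen.  Ranges here are illustrative. */
    float shade = light / 256.0f;       /* 0..2, where 1.0 is fully lit */

    if (shade > 2.0f)
        shade = 2.0f;
    glColor3f (shade * 0.5f, shade * 0.5f, shade * 0.5f);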
A lot of people have noticed that QF's fps count has gone to shit lately.
No promises that this undid whatever the problem was. That means there
could be a huge optimization lurking somewhere in the shadows, waiting for
us to fix it for a massive FPS boost. Even if there's not, the code in
this commit DOUBLED MY FPS COUNT. Granted, I was getting pathetic FPS as
it was (around 30, which is pathetic even for a Voodoo3 in Linux), but
still: 60 is a big improvement over 30!
Please be sure to "test" this code thoroughly.
In order to get it building on more platforms, I:
* included strings.h and string.h in many files so various functions would be
defined
* Fixed model_t collision problem in cl_main.c (Solaris)
* com.c - corrected WORDS_BIGENDIAN spelling
* gl_draw.c - use HAVE_GL_COLOR_INDEX8_EXT to avoid referencing
GL_COLOR_INDEX8_EXT when it isn't available (see the sketch after this
list)
* net_udp.c - use socklen_t to appease AIX
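For the gl_draw.c item, the guard is the standard autoconf pattern; a
sketch with a hypothetical variable name:

    #ifdef HAVE_GL_COLOR_INDEX8_EXT
        internal_format = GL_COLOR_INDEX8_EXT;  /* paletted texture */
    #else
        internal_format = GL_RGBA;  /* token unavailable: plain truecolor */
    #endif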
Fixed the little problem of mixed QFile and FILE. Since we're not using
ZLib in this tree, QFile makes no real sense. That didn't fix the real
problem I'm having, though.
Split up the header files and such. common.[ch] and qwsvdef.h no longer
exist. More work still needs to be done (especially for Windows), but
this should be a major improvement.