It now takes a context pointer (opaque data) that holds the buffers it
uses for the temporary strings. If the context pointer is null, a static
context is used (making those uses of va NOT thread-safe). Most calls to
va use the static context, but all such calls have been formatted
consistently so they are easy to find when it comes time to do a full
audit.
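Roughly, the pattern looks like this (the names, buffer count and sizes
here are only illustrative, not the actual va API):

    #include <stdarg.h>
    #include <stdio.h>

    #define VA_BUFFERS 4
    #define VA_BUFSIZE 1024

    typedef struct va_ctx_s {
        char    buffers[VA_BUFFERS][VA_BUFSIZE];
        int     next;
    } va_ctx_t;

    static va_ctx_t static_ctx;     /* shared fallback: NOT thread-safe */

    const char *
    va_sketch (va_ctx_t *ctx, const char *fmt, ...)
    {
        va_list     args;
        char       *buf;

        if (!ctx)
            ctx = &static_ctx;      /* null context -> static buffers */
        buf = ctx->buffers[ctx->next];
        ctx->next = (ctx->next + 1) % VA_BUFFERS;

        va_start (args, fmt);
        vsnprintf (buf, VA_BUFSIZE, fmt, args);
        va_end (args);
        return buf;
    }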
I had messed up my index array creation, but once that was fixed the
textures worked well, other than a lot of pixels being shades of grey
due to falling in the top or bottom of the color map range.
The dynamic array macros made this much easier than last time I looked
at it, especially when it came to figuring out the bad memory accesses
that I seem to remember from my last attempt 9 years ago.
This makes tex_t more generally usable and probably more portable. The
goal was to be able to use tex_t with data that is in a separate chunk
of memory.
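The idea, in sketch form (field names simplified, not the exact tex_t
definition), is that the pixel data is referenced by pointer rather than
stored inline, so it can live anywhere:

    typedef unsigned char byte;

    typedef struct sketch_tex_s {
        int         width;
        int         height;
        const byte *palette;    /* NULL for non-paletted data */
        byte       *data;       /* pixels may live in a separate chunk of memory */
    } sketch_tex_t;

    /* Wrap an existing block of pixels without copying it. */
    static sketch_tex_t
    wrap_pixels (byte *pixels, int width, int height)
    {
        sketch_tex_t tex = { width, height, 0, pixels };
        return tex;
    }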
The sky texture is loaded with black's alpha set to 0. While this does
hit both layers, the screen is cleared to black so it shouldn't be a
problem (and will allow having a skybox behind the sheets).
It now uses the ring buffer code I wrote for qwaq (and forgot about,
oops) to handle the packets themselves, and the logic for allocating and
freeing space from the buffer is a bit simpler and seems to be more
reliable. The automated test is a bit of a joke now, though coming up
with good tests for it... However, nq now cycles through the demos
without obvious issue under the same conditions that caused the light
map update code to segfault.
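The core of the scheme is just a byte ring with head and tail indices;
very roughly (this is a simplified sketch, not the actual qwaq ring
buffer API):

    #define RING_SIZE 8192      /* hypothetical capacity */

    typedef struct ringbuf_s {
        unsigned    head;       /* next byte to write */
        unsigned    tail;       /* next byte to read */
        char        buffer[RING_SIZE];
    } ringbuf_t;

    static unsigned
    ring_avail (const ringbuf_t *ring)
    {
        unsigned    used = (ring->head + RING_SIZE - ring->tail) % RING_SIZE;
        return RING_SIZE - 1 - used;    /* one byte kept free to tell full from empty */
    }

    /* Allocate space for a packet by copying it in and advancing head. */
    static int
    ring_write (ringbuf_t *ring, const void *data, unsigned len)
    {
        const char *src = data;

        if (len > ring_avail (ring))
            return -1;
        for (unsigned i = 0; i < len; i++)
            ring->buffer[(ring->head + i) % RING_SIZE] = src[i];
        ring->head = (ring->head + len) % RING_SIZE;
        return 0;
    }

    /* Free space by advancing tail past a consumed packet. */
    static void
    ring_release (ringbuf_t *ring, unsigned len)
    {
        ring->tail = (ring->tail + len) % RING_SIZE;
    }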
Copying data from the wrong buffer was the cause of the corrupted brush
model vertices, and then there were lots of little errors (mostly
forgetting to multiply by bpp) in the texture handling.
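For the record, the texture case boils down to remembering that offsets
and row lengths are in bytes, not pixels (an illustrative helper, not
the actual code):

    #include <string.h>
    #include <stddef.h>

    /* Copy a width x height block of pixels; pitches are in bytes. */
    static void
    copy_rect (unsigned char *dst, size_t dst_pitch,
               const unsigned char *src, size_t src_pitch,
               size_t width, size_t height, size_t bpp)
    {
        for (size_t y = 0; y < height; y++)
            memcpy (dst + y * dst_pitch, src + y * src_pitch, width * bpp);
    }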
This fixes a nine year old bug that I discovered only today thanks to
the vulkan renderer. The problem was that when a model had a clear
callback, it was not getting marked as needing to be reloaded, and thus
the model would be "reused" after being trampled on by another model
loading over it.
Also, plug a potential string buffer overflow (strcpy just will not
die!).
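The general shape of that fix (buffer size and names purely
illustrative):

    #include <string.h>

    #define NAME_SIZE 64                     /* arbitrary example size */

    static void
    set_name (char dest[NAME_SIZE], const char *src)
    {
        /* strcpy (dest, src);                  unbounded: can overflow dest */
        strncpy (dest, src, NAME_SIZE - 1);  /* bounded copy... */
        dest[NAME_SIZE - 1] = 0;             /* ...always nul-terminated */
    }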
This cleans up texture_t and possibly even improves locality of
reference when running through texture chains (not profiled, and not
actually the goal).
There's still some cleanup to do, but everything seems to be working
nicely: `make -j` works, `make distcheck` passes. There is probably
plenty of bitrot in the package directories (RPM, debian), though.
The vc project files have been removed since those versions are way out
of date and quakeforge is pretty much dependent on gcc now anyway.
Most of the old Makefile.am files are now Makemodule.am. This should
allow for new Makefile.am files that support local building (to be added
on an as-needed basis). The current remaining Makefile.am files are for
standalone sub-projects.
The installable bins are currently built in the top-level build
directory. This may change if the clutter gets to be too much.
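The layout, very roughly (paths and names here are only an example of
the pattern, not the actual file contents):

    # top-level Makefile.am
    bin_PROGRAMS =
    noinst_LTLIBRARIES =
    include libs/util/Makemodule.am
    include tools/qfcc/Makemodule.am

    # libs/util/Makemodule.am: appends to the shared variables instead of
    # triggering a recursive sub-make
    noinst_LTLIBRARIES += libs/util/libQFutil.la
    libs_util_libQFutil_la_SOURCES = libs/util/hash.c libs/util/va.c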
While this does make a noticeable difference in build times, the main
reason for the switch was to take care of the growing dependency issues:
now it's possible to build tools for code generation (e.g., using qfcc
and ruamoko programs for code-gen).
They take a pointer to a free-list used for hashlinks so the hashlink
pools can be per-thread. However, hash tables that are not updated are
always thread-safe, so this affects only updates. progs_t has been set
up such that it is easy for multiple progs within one thread to share
hashlinks.
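The shape of it (a simplified sketch, not the actual QF hash API):

    #include <stdlib.h>

    typedef struct hashlink_s {
        struct hashlink_s *next;
        /* key/value fields would go here */
    } hashlink_t;

    typedef struct hashtab_s {
        hashlink_t **free_list;     /* per-thread pool, owned by the caller */
        /* buckets etc. omitted */
    } hashtab_t;

    static hashlink_t *
    link_new (hashtab_t *tab)
    {
        hashlink_t *link = *tab->free_list;

        if (link)
            *tab->free_list = link->next;        /* reuse from this thread's pool */
        else
            link = malloc (sizeof (hashlink_t)); /* pool empty: allocate */
        return link;
    }

    static void
    link_free (hashtab_t *tab, hashlink_t *link)
    {
        link->next = *tab->free_list;            /* return to this thread's pool */
        *tab->free_list = link;
    }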
After messing with SIMD stuff for a little while, I think I now understand why
the industry went with xyzw instead of the mathematical wxyz. Anyway, this
will make for less pain in the future (assuming I got everything).
These are the ones where I could easily make scan-build happy. They do seem
to be potential holes where invalid data in one place could result in use
of uninitialized values.
While scan-build wasn't what I was looking for, it has proven useful
anyway: many of the sizeof errors were just noise, but a few were actual
bugs (allocating too much or too little memory).
This was caused by an off-by-one error when setting up the skin: if cmap
was 0 then the gl_skin struct would be taken from index -1 of the array and
thus cause all sorts of grief.
GL sometimes crashes when building skins. This probably isn't the correct
fix (finding the situation where fb->tex can become NULL despite fb being
non-null is), but it does kill the segfault. Luckily, this is git and this
commit can just be reverted when the real fix shows up. :)
The search for these files will stop in the vpath that contains the .bsp
file to which they belong. This will prevent problems with
id1/maps/start.lit being used for shadows/maps/start.bsp.
Since the hull depth needs to be set for the hull to be useful, it makes
sense to move the code into the same place that allocates new hulls (to me,
anyway).
The depth limits in the gl and glsl renderers and in the trace code really
bothered me, but then the fix hit me: at load-time, recurse the trees
normally and record the depth in the appropriate place. The node stacks can
then be allocated as necessary (I chose to add a paranoia buffer of 2, but
I expect the maximum depth will rarely be used).
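Something along these lines (node layout simplified to the bare
minimum):

    #include <stdlib.h>

    typedef struct node_s {
        struct node_s *children[2];     /* NULL at leaves in this sketch */
    } node_t;

    static int
    tree_depth (const node_t *node)
    {
        if (!node)
            return 0;
        int left = tree_depth (node->children[0]);
        int right = tree_depth (node->children[1]);
        return 1 + (left > right ? left : right);
    }

    /* Allocate a node stack big enough for any traversal of this tree. */
    static const node_t **
    alloc_node_stack (const node_t *root, int *stack_size)
    {
        *stack_size = tree_depth (root) + 2;    /* +2 paranoia buffer */
        return malloc (*stack_size * sizeof (const node_t *));
    }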
This fixes the horribly different results between optimized and unoptimized
qfbsp (there is still a difference of 1 brushface). Unfortunately, it also
severely limits the maximum size of a map.
All of the nastiness is hidden in bspfile.c (including the old bsp29
specific data types). However, the conversions between bsp29 and bsp2 are
implemented but not yet hooked up properly. This commit just gets the data
structures in place and the obvious changes necessary to the rest of the
engine to get it to compile, plus a few obvious "make it work" changes.
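The conversions themselves are mostly just widening the 16-bit fields to
32 bits; a simplified sketch (these are not the real on-disk layouts):

    #include <stdint.h>

    typedef struct {
        int16_t     children[2];
        int16_t     mins[3], maxs[3];
        uint16_t    firstface, numfaces;
    } sketch_node29_t;

    typedef struct {
        int32_t     children[2];
        int16_t     mins[3], maxs[3];
        uint32_t    firstface, numfaces;
    } sketch_node2_t;

    static void
    node29_to_node2 (sketch_node2_t *out, const sketch_node29_t *in)
    {
        for (int i = 0; i < 2; i++)
            out->children[i] = in->children[i]; /* sign-extends leaf (negative) indices */
        for (int i = 0; i < 3; i++) {
            out->mins[i] = in->mins[i];
            out->maxs[i] = in->maxs[i];
        }
        out->firstface = in->firstface;
        out->numfaces = in->numfaces;
    }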
In the end, it turns out this is the correct fix for the gl seg on
overkill, because build_skin will correctly use the fully set up player skin
if the glskin doesn't have a texture associated with it.
Double converting texcoords results in 0,0 for all affected texcoords. Mr
Fixit was looking rather ill. Now he looks weird (something wrong in the
renderer?).
There are still many issues to sort out, but the basics are working.
Problems:
o rendered fullbright (no lighting done)
o normals are ignored
o extra textures (glow etc) not used/loaded
4 models on the screen don't seem to be a problem.
Since iqm vertex arrays are variable, and I don't want to calculate the
stride every time I render a model, the stride used when building the
arrays is now cached.
Still, nothing will work: no plugins are loaded and they're all broken
anyway.
glx, sgl, glslx etc are going away; just the basics will be built: fbdev
(which will probably go away eventually), sdl, x11 and hopefully someday
win. That's actually the only reason anything links.
If the client receives a skin updated message from the server before having
loaded the player model (shouldn't happen, but some servers have very
strange programmers), no skin data is available for updating, so just bail
out.
Where possible, symbols have been made static, prefixed with glsl_/GLSL_ or
moved into the code shared by all renderers. This will make doing plugins
easier, but is done now for link testing. The moving was done via the gl
commit.
Where possible, symbols have been made static, prefixed with gl_/GL_ or
moved into the code shared by all renderers. This will make doing plugins
easier, but is done now for link testing.
o All instances of LIBADD/LDADD have a corresponding DEPENDENCIES
specification.
o libraries now use a lib_ldflags macro to keep things consistent
o duplication of source/lib names has been minimized (particularly in
the libraries; more work needs to be done for the executables)
o automake spec blocks have been organized (again, more work needs to be
done for the executables)