Bounds are optional for animated IQM models but cannot be included
with unanimated models (the format seems intended for use with a
separate model containing the animations and bounds). Calculating
bounds for unanimated IQM models fixes culling and the HUD head model,
which calculates its position from the model bounds.
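A minimal sketch of the idea (illustrative only, not the actual loader
code): expand a min/max box over every vertex position in the model.

    /* Derive mins/maxs by expanding a box over all vertex positions. */
    static void CalcModelBounds( const float *xyz, int numVerts,
                                 float mins[3], float maxs[3] )
    {
        int i, j;
        for ( j = 0; j < 3; j++ ) {
            mins[j] =  99999.0f;
            maxs[j] = -99999.0f;
        }
        for ( i = 0; i < numVerts; i++, xyz += 3 ) {
            for ( j = 0; j < 3; j++ ) {
                if ( xyz[j] < mins[j] ) mins[j] = xyz[j];
                if ( xyz[j] > maxs[j] ) maxs[j] = xyz[j];
            }
        }
    }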
The axis returned for an IQM tag was the animation's joint rotation
without the base frame joint rotation. It only worked correctly for
models that did not rotate the base frame joints.
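Conceptually the fix composes the two rotations instead of returning
the animation's alone; a sketch using the engine's 3x3 MatrixMultiply,
with hypothetical matrix names and illustrative operand order:

    /* Include the base frame joint rotation in the returned axis
       rather than the animation's joint rotation alone. */
    MatrixMultiply( animJointRotation, baseJointRotation, tag->axis );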
Using GPU vertex skinning is significantly faster than CPU vertex
skinning, especially since OpenGL2 has to run R_VaoPackNormal() and
R_VaoPackTangent() for each vertex each frame, which makes CPU vertex
skinning significantly slower than in the OpenGL1 renderer.
Only calculate the vertex blend matrix for each unique bone
indexes/weights combination once per surface instead of recalculating
it for each vertex. For best performance, model surfaces need to use
few vertex bone index/weight combinations.
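A hedged sketch of the per-surface caching idea (structure and names
are assumptions, not the actual implementation):

    #include <string.h>

    typedef struct {
        unsigned char indexes[4], weights[4];
        float matrix[12];
    } blendEntry_t;

    static blendEntry_t blends[256]; /* hypothetical per-surface cache */
    static int numBlends;

    /* Return the cache slot for this bone indexes/weights combination,
       computing its blend matrix only on first use. */
    static int FindBlendMatrix( const unsigned char indexes[4],
                                const unsigned char weights[4] )
    {
        int i;
        for ( i = 0; i < numBlends; i++ ) {
            if ( !memcmp( blends[i].indexes, indexes, 4 ) &&
                 !memcmp( blends[i].weights, weights, 4 ) ) {
                return i; /* already computed for this combination */
            }
        }
        memcpy( blends[numBlends].indexes, indexes, 4 );
        memcpy( blends[numBlends].weights, weights, 4 );
        /* ...fill blends[numBlends].matrix from the four weighted
           joint matrices here... */
        return numBlends++;
    }

The win comes from most vertexes on a surface sharing a handful of
combinations, which is also why surfaces with many unique combinations
benefit less.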
Unroll loops so GCC optimizes them better.
In my tests, drawing animated IQM models may take 50% as long in
opengl1 and 70% as long in opengl2. It will vary by model, though, and
might not help much at all.
Made unanimated IQM models skip matrix math altogether.
- Only allocate memory for vertex arrays that are present in the IQM
file and are actually used (a model may not have colors or blend
indexes/weights, and tangents are not loaded in opengl1). (Color
loading is fixed in the next commit.)
- Explicitly handle loading IQM files without meshes (bones only).
- Better IQM validation: a header data offset of 0 means the data is
not present in the file. Check that the required vertex arrays are
present.
This involved a lot of whitespace changes and moving code around.
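A hedged sketch of the offset rule (field names follow the public IQM
spec; the real validation checks more than this):

    /* Offset 0 in the IQM header means that block of data is not
       present in the file, so a bones-only model has ofs_meshes 0. */
    int hasMeshes = header->num_meshes && header->ofs_meshes;
    int hasBounds = header->num_frames && header->ofs_bounds;
    if ( !hasMeshes ) {
        /* load joints/poses only; skip meshes and vertex arrays */
    }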
Fix two constants in the GLSL shaders: remove the f suffix from a
float literal and fix an int-to-float assignment. They were causing
shader compile errors in an OpenGL ES 2 context.
Remove the code that disables the clip plane. The clip plane is unused
and never enabled in the opengl2 renderer, and disabling it causes a
GL error when using an OpenGL 3.2 core profile or OpenGL ES.
Make the VAO cache vertex stride the size of srfVert_t, since that is
what is uploaded to the GPU. No behavior change. (There is a disabled
debug id in srfVert_t, though, which if enabled changes the size of
srfVert_t.)
OpenGL ES is only required to support unsigned short for element buffer
values.
R_DrawElements()'s firstIndex argument was glIndex_t, which caused
element indexes to wrap around to 0 when glIndex_t is an unsigned
short. (glIndex_t is an index into the vertex buffer, not the element
buffer.) Change it to int, like tess.firstIndex, which is passed to
R_DrawElements().
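The resulting change, sketched as before/after declarations
(surrounding declaration context assumed):

    /* before: when glIndex_t is unsigned short, a firstIndex of
       65536 truncates to 0 and the wrong element range is drawn */
    void R_DrawElements( int numIndexes, glIndex_t firstIndex );

    /* after: matches the type of tess.firstIndex */
    void R_DrawElements( int numIndexes, int firstIndex );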
The world VAO cache buffer size allowed storing more vertexes than the
unsigned short glIndex_t could reference. This resulted in the vertex
indexes in the element buffer wrapping around to 0.
Load the function procs supported by OpenGL ES 2.0, though there is no
compatible renderer yet. Change the GLimp_Init argument from
coreContext to fixedFunction.
Also declare the GL functions in tr_local.h so that using a non-core
GL function is a compile error instead of a SEGFAULT from
dereferencing a NULL pointer.
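Illustrative qgl-style declarations (the exact macro plumbing in
tr_local.h differs): a call site using a function that was never
declared fails to compile instead of calling a NULL pointer at
runtime.

    extern void (APIENTRY *qglBindVertexArray)( GLuint array );
    extern void (APIENTRY *qglGenVertexArrays)( GLsizei n, GLuint *arrays );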
Disable the non-functional stencil shadow code, which has not been
updated to use OpenGL 3.2 core compatible drawing.
If the renderer is compiled into the client (USE_RENDERER_DLOPEN=0)
and, after start up, r_allowExtensions is set to 0 and vid_restart is
run, some extensions were still used.
On Windows, the opaque textures on the holodeck doors model were drawn
solid black and did not fade out.
Set up the ref entity lightDir and modelLightDir as unit vectors. I am
guessing the vector lengths are being used as a divisor in a GLSL
shader or something.
Also clear directedLight so the same fields are set for RF_FULLBRIGHT
as for all other ref entities.
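A minimal sketch of the intent (field names are from the commit text;
the surrounding setup code is assumed):

    /* Store unit-length light directions and clear directedLight so
       RF_FULLBRIGHT entities set the same fields as all others. */
    VectorNormalize( ent->lightDir );
    VectorNormalize( ent->modelLightDir );
    VectorClear( ent->directedLight );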
Moved all the code using Altivec intrinsics to separate files. This
means we can optionally use GCC's -maltivec on just these files, whose
code paths are selected at runtime if the CPU supports Altivec, and
compile the rest without it, making a single binary that has Altivec
optimizations but can still work on G3.
Unlike SSE and similar extensions on x86, there does not seem to be
a way to enable conditional, targeted use of Altivec based on runtime
detection (which is what ioquake3 wants to do) without also giving the
compiler permission to use Altivec in code generation; so to not crash
on CPUs that do not implement Altivec, we'll have to turn it off
altogether, except in translation units that are only entered when
runtime Altivec detection is successful.
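A hedged sketch of the resulting pattern (the ColorShift_* function
names are hypothetical; com_altivec is the engine's runtime toggle):

    extern cvar_t *com_altivec;   /* set from runtime CPU detection */

    /* hypothetical names; the Altivec version lives in the only
       translation unit built with -maltivec */
    void ColorShift_altivec( const unsigned char *in, unsigned char *out, int n );
    void ColorShift_generic( const unsigned char *in, unsigned char *out, int n );

    /* this file is compiled WITHOUT -maltivec */
    void ColorShift( const unsigned char *in, unsigned char *out, int n )
    {
        if ( com_altivec->integer ) {
            /* safe: only reached after detection succeeded */
            ColorShift_altivec( in, out, n );
        } else {
            ColorShift_generic( in, out, n );
        }
    }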
This has been tested on Linux PPC (on an Altivec-enabled CPU),
but we may need further work after testing trickles out to other
PowerPC devices and ancient Mac OS X builds.
I did a little work on this patch, but the majority of the effort belongs
to Simon McVittie (thanks!).
Fix specularScale <metallic> <smoothness> with r_pbr 1, which had been
broken since r_pbr was implemented in 2016.
Fix specularScale <r> <g> <b> <gloss>, which set b to gloss and left
gloss as 0 since it was implemented in 2014.
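A hedged sketch of the corrected four-value assignment (token names
and parser details are assumptions):

    /* specularScale <r> <g> <b> <gloss> must set all four
       components; the bug put gloss into b and left gloss at 0. */
    stage->specularScale[0] = atof( tokenR );
    stage->specularScale[1] = atof( tokenG );
    stage->specularScale[2] = atof( tokenB );
    stage->specularScale[3] = atof( tokenGloss );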
Team Arena's mpteam6 map has a shader, textures/base_wall2/space_concrete,
that contains an opaque stage, two non-lightmap blendfunc filter
stages, a blendfunc add stage, and a lightmap stage. The lightmap was
attached to all four of the non-lightmap stages, causing the filter
stages to darken the lightmap multiple times.
Change the lightall GLSL shader setup to only use the lightmap if it's
the first stage or not a blendfunc filter stage. Now only the opaque
and blendfunc add stages of the mpteam6 shader use the lightmap.
Reported by Alexander Nadeau (wareya).
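A hedged sketch of the new stage check (names illustrative; a
blendfunc filter stage multiplies by the destination color, i.e.
GL_DST_COLOR GL_ZERO):

    qboolean isFilterStage =
        ( stateBits & GLS_SRCBLEND_BITS ) == GLS_SRCBLEND_DST_COLOR &&
        ( stateBits & GLS_DSTBLEND_BITS ) == GLS_DSTBLEND_ZERO;
    qboolean useLightmap = ( stageIndex == 0 ) || !isFilterStage;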
Use the opengl1 renderer's behavior of adding a fixed amount of
ambient light to all models regardless of the HDR setting. This fixes
the view weapon having zero ambient light on the pillcity map.
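For reference, the opengl1 behavior being matched looks roughly like
this fixed minimum add in entity lighting (treat as a sketch):

    /* give every model a fixed minimum ambient add */
    ent->ambientLight[0] += tr.identityLight * 32;
    ent->ambientLight[1] += tr.identityLight * 32;
    ent->ambientLight[2] += tr.identityLight * 32;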
The game world is too dark when r_autoExposure is disabled. It can be
fixed by setting the (cheat) r_cameraExposure cvar to 1, but then the
game is too bright when r_autoExposure is enabled. So default
r_cameraExposure to 1 and make auto exposure subtract 1 from the
r_cameraExposure value.
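The resulting relationship, sketched (cvar access illustrative):

    /* with the new default of 1, the two paths agree at default */
    float exposure = r_cameraExposure->value;
    if ( r_autoExposure->integer ) {
        exposure -= 1.0f;   /* auto exposure compensates for the
                               new default */
    }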
- Parse the OpenGL version in sdl_glimp.c to share it with both
renderers.
- Add a GL_VERSION_ATLEAST(major, minor) macro (a sketch follows this
list).
- Get the address of glGetStringi if using OpenGL 3.
- Fix glConfig.extensions_string when using a GL3 core context in the
opengl2 renderer.
- Make the opengl1 renderer's gfxinfo support qglGetStringi too.
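A plausible form of the macro (the variable names holding the parsed
version are assumptions; the actual definition may differ):

    #define GL_VERSION_ATLEAST( major, minor ) \
        ( qglMajorVersion > ( major ) || \
          ( qglMajorVersion == ( major ) && qglMinorVersion >= ( minor ) ) )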
Get all OpenGL functions using SDL_GL_GetProcAddress(). This makes it
easier to cross-arch compile on Linux and to add support for OpenGL ES
in the future.
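A minimal sketch of the loading pattern (qglClear follows the engine's
qgl naming convention; error handling elided):

    /* resolve each GL entry point at runtime through SDL instead of
       linking against a specific libGL at build time */
    qglClear = (void (APIENTRY *)( GLbitfield ))
        SDL_GL_GetProcAddress( "glClear" );
    /* a missing required entry point is reported as a fatal error */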
Users still have to supply their own libSDL2 for cross-arch compiling
on Linux, but the user no longer has to re-install the libgl1-mesa-dev
package for i386 or amd64 on Debian when switching between compiling
ioquake3 for x86 and x86_64.
After 'Fix floating point precision loss in renderer', the Windows x86
client won't load the renderer DLLs. The problem is a 64-bit modulus.
I couldn't find any reports of this online; however, a client with the
built-in renderer worked with the 64-bit modulus.
Only tested with mingw-w64.
Fix floatTime using float precision instead of double when compiled
with GCC.
Fix R_BindAnimatedImage to be in sync with the function table.
Fix the vertexDeform bulge, vertexDeform normals, and noise wave
function at high level time.
Revert unnecessary float -> double conversions.
Patch for https://bugzilla.icculus.org/show_bug.cgi?id=5931 by
Eugene C. from 2013, plus a recent fix for tcMod rotate.
I merged the changes into the OpenGL2 renderer, though the fix for
tcMod turb doesn't translate.
Models don't have a surface limit; skins shouldn't either. Some player
models require more than 32 surfaces, since vanilla Quake 3 did not
enforce the limit.
Skins are now limited to 256 surfaces, because having no limit would
require parsing the skin file twice. The skin surfaces are dynamically
allocated, so memory usage does not increase when fewer surfaces are
used.
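A hedged sketch of the allocation (parsedSurfaces and the copy step
are assumptions; Hunk_Alloc is the renderer's allocator):

    /* size the surface array from the parsed count instead of a
       fixed maximum-sized array */
    skin->surfaces = ri.Hunk_Alloc(
        skin->numSurfaces * sizeof( skinSurface_t ), h_low );
    memcpy( skin->surfaces, parsedSurfaces,
            skin->numSurfaces * sizeof( skinSurface_t ) );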