Fix two constants in GLSL shaders. Remove the f suffix from a float constant and fix
an int-to-float assignment. They were causing shader compile errors in an
OpenGL ES 2 context.
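For reference, GLSL ES 1.00 (the OpenGL ES 2 shading language) rejects the f
suffix on float literals and has no implicit int-to-float conversion, so the
fixes are of this form (constant names are only illustrative, not the actual
shader lines):

    // rejected by GLSL ES 1.00: literal suffix and int-to-float assignment
    const float exposure = 1.0f;
    float scale = 2;

    // accepted by both desktop GLSL and GLSL ES
    const float exposure = 1.0;
    float scale = 2.0;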
Remove disabling the clip plane. The clip plane is unused and never enabled in
the opengl2 renderer, and disabling it causes a GL error when using the
OpenGL 3.2 core profile or OpenGL ES.
Make the VAO cache vertex stride be sizeof(srfVert_t), since that is what is
uploaded to the GPU. No behavior change, though srfVert_t contains a disabled
debug id which, if enabled, changes the size of srfVert_t.
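A sketch of the intent (the attribute index and member offset are just for
illustration): the stride passed for cached vertex data matches the layout that
is actually uploaded.

    #include <stddef.h>  /* offsetof */

    // stride is sizeof(srfVert_t) because whole srfVert_t records are uploaded;
    // the disabled debug id in srfVert_t would change this size if enabled
    qglVertexAttribPointer( ATTR_INDEX_POSITION, 3, GL_FLOAT, GL_FALSE,
                            sizeof(srfVert_t), (void *)offsetof(srfVert_t, xyz) );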
OpenGL ES is only required to support unsigned short for element buffer
values.
R_DrawElements()'s firstIndex argument was glIndex_t, which caused element
indexes to wrap around to 0 when glIndex_t is an unsigned short.
(glIndex_t is an index into the vertex buffer, not the element buffer.)
Change it to 'int', like tess.firstIndex, which is what gets passed to
R_DrawElements().
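The change is essentially the prototype below (a sketch of the signature change
described above):

    // before: firstIndex wrapped to 0 once it exceeded the unsigned short range
    static void R_DrawElements( int numIndexes, glIndex_t firstIndex );

    // after: matches tess.firstIndex, which is an int
    static void R_DrawElements( int numIndexes, int firstIndex );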
The world VAO cache buffer size allowed storing more vertexes than an unsigned
short glIndex_t can reference. This resulted in the vertex indexes in the
element buffer wrapping around to 0.
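Put differently, an unsigned short element index can only address 65536
vertexes, so the cache's vertex buffer has to be sized accordingly (the define
name below is illustrative, not the actual one):

    #include <limits.h>

    /* unsigned short indexes can reference vertexes 0..USHRT_MAX, so the
       world VAO cache must never hold more than USHRT_MAX + 1 vertexes */
    #define WORLD_VAO_CACHE_MAX_VERTEXES ( USHRT_MAX + 1 )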
Load the function procs supported by OpenGL ES 2.0, though there is no
compatible renderer yet. Change the GLimp_Init argument from coreContext
to fixedFunction.
Also declare the GL functions in tr_local.h so that using a non-core GL
function is a compile error instead of a SEGFAULT from dereferencing a NULL
pointer.
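The idea is roughly the following (the declaration form is only illustrative;
the real declarations go through the qgl proc tables): only functions that
exist in core/ES contexts get a qgl declaration.

    // declared in tr_local.h: available in core and ES contexts
    extern void (APIENTRY * qglDrawElements)( GLenum mode, GLsizei count,
                                              GLenum type, const void *indices );

    // legacy fixed-function entry points such as qglAlphaFunc are intentionally
    // not declared, so any remaining use fails at compile time instead of
    // calling a NULL function pointer at runtime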
Disable the non-functional stencil shadow code that hasn't been updated
to use OpenGL 3.2 core compatible drawing.
If the renderer is compiled into the client (USE_RENDERER_DLOPEN=0) and, after
start up, r_allowExtensions is set to 0 and vid_restart is run, some extensions
were still used.
This is one of the de facto standard interfaces used in Linux
distributions for cross-compilation (alongside overriding CC and
similar variables), and in particular is used in Debian and its
derivatives.
Signed-off-by: Simon McVittie <smcv@debian.org>
Connecting to a server running a different fs_game and using an autoexec.cfg
containing in_restart would hit a fatal error in IN_Init():
"IN_Init called before SDL_Init( SDL_INIT_VIDEO )"
Reported by smokey2k on the ioquake3 forum.
The Linux loki-setup uninstaller fails on distros that have /bin/sh linked to
Dash instead of Bash. Use the standard shell method for redirecting stdout and
stderr instead of a Bash-specific method.
In 2013 ioquake3 stopped referencing the pk3 file that qagame.qvm was
loaded from. This had the unintended side effect of causing non-dedicated pure
servers to no longer reference a pk3 that only contains the three QVM files.
Non-dedicated pure servers did not reference the pk3 containing the latest
cgame.qvm, so if the client did not have the pk3 file it was kicked as unpure
instead of trying to download the pk3 file.
Also make server touch ui.qvm since it's required to pass pure check and
may be separate from cgame.qvm.
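A sketch of what "touching" a QVM means here (the helper name is invented for
illustration): opening the file through the filesystem marks the pk3 containing
it as referenced, so pure clients are told to download it.

    // illustrative helper; the real code lives in the server's pure-list setup
    static void SV_TouchQvm( const char *module ) {
        fileHandle_t f;
        char         filename[MAX_QPATH];

        Com_sprintf( filename, sizeof( filename ), "vm/%s.qvm", module );
        FS_FOpenFileRead( filename, &f, qfalse );  // marks the containing pk3 as referenced
        if ( f ) {
            FS_FCloseFile( f );
        }
    }

    // cgame.qvm and ui.qvm may live in different pk3s, so touch both
    SV_TouchQvm( "cgame" );
    SV_TouchQvm( "ui" );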
Moved all the code using Altivec intrinsics to separate files. This
means we can optionally use GCC's -maltivec on just these files, which
are chosen at runtime if the CPU supports Altivec, and compile the rest
without it, making a single binary that has Altivec optimizations but
can still work on G3.
Unlike SSE and similar extensions on x86, there does not seem to be
a way to enable conditional, targeted use of Altivec based on runtime
detection (which is what ioquake3 wants to do) without also giving the
compiler permission to use Altivec in code generation; so to not crash
on CPUs that do not implement Altivec, we'll have to turn it off
altogether, except in translation units that are only entered when
runtime Altivec detection is successful.
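A sketch of the resulting pattern (function names are illustrative): the
generic translation unit only calls the Altivec variant behind the runtime
check, and only the file defining that variant is built with -maltivec.

    // in a file compiled WITHOUT -maltivec; safe on G3
    if ( com_altivec->integer ) {
        // defined in a separate file that is the only one built with -maltivec,
        // and only reached when runtime detection enabled com_altivec
        S_PaintChannelFrom16_altivec( ch, sc, count, sampleOffset, bufferOffset );
    } else {
        S_PaintChannelFrom16_scalar( ch, sc, count, sampleOffset, bufferOffset );
    }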
This has been tested on Linux PPC (on an Altivec-enabled CPU),
but we may need further work after testing trickles out to other
PowerPC devices and ancient Mac OS X builds.
I did a little work on this patch, but the majority of the effort belongs
to Simon McVittie (thanks!).
When you start recording using the SDL pulseaudio driver, the client sends
all audio captured while not recording. 240 milliseconds of audio is sent each
frame until the capture buffer is empty. This is a problem for privacy and
causes confusing-to-debug VoIP playback issues on other clients connected to
the server and when playing back demos.
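A minimal sketch of the fix, assuming SDL 2.0.5+'s queued-audio API for capture
devices (the device handle name is illustrative): throw away anything recorded
while capture was off before starting to send.

    // when capture starts, drop audio queued while we were not recording
    // so stale audio is never sent to the server
    SDL_ClearQueuedAudio( sdlCaptureDevice );
    SDL_PauseAudioDevice( sdlCaptureDevice, 0 );  // then begin capturing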
Fix specularScale <metallic> <smoothness> with r_pbr 1, which has been
broken since r_pbr was implemented in 2016.
Fix specularScale <r> <g> <b> <gloss> setting b to the gloss value and leaving
gloss as 0, which has been broken since it was implemented in 2014.
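A sketch of the intended four-argument case (the stage field and token handling
are simplified for illustration); the bug assigned the fourth token to the blue
component and never set gloss.

    // specularScale <r> <g> <b> <gloss>
    stage->specularScale[0] = atof( tokenR );
    stage->specularScale[1] = atof( tokenG );
    stage->specularScale[2] = atof( tokenB );     // was mistakenly given the gloss token
    stage->specularScale[3] = atof( tokenGloss ); // was left at 0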
The key handler allowed going 2 beyond the end of the bot list, and the
display function clamped to 0, causing the first bot to be shown 3 times.
Attempting to add the bot in gametypes < GT_TEAM would fall back to
Sarge in UI_GetBotNameByNumber() (who isn't the first bot), and gametypes
>= GT_TEAM would access characterList past the known values (typically NULL,
but if teaminfo.txt contained 63 characters it would access out-of-bounds
memory).
Switching to a dedicated camera follower with no possible players to
follow would spawn at the intermission point and display the "connection
interrupted" HUD message. Pmove() was not run for the client, so
ps.commandTime was too far behind. I made it so that dedicated camera
followers and the scoreboard run Pmove() but cannot move (PM_FREEZE).
When all players possible to follow leave, the dedicated camera follower
would continue to display the old player state of the player they were
following (along with "connection interrupted" HUD message). Unlike the
regular case of a spectator following a specific player, dedicated
camera followers did not reset their player state to the intermission
point after the followed player was no longer valid.
Now a client can be set as 'team follow1' to automatically switch
between displaying the intermission point and following a player when
possible.
SDL 2.0.5 dropped support for macOS 10.5 so target 10.6 instead. The
PPC build uses SDL 2.0.1 so it still targets 10.5. macOS 10.5 (x86,
x86_64) should automatically run the PPC build using Rosetta.
Revert MAN-AT-ARMS' change to SDL 2.0.8 SDL_platform.h that allowed
targeting macOS 10.5 for the sake of PPC. It also incorrectly allowed
x86 and x86_64 to target 10.5 as well. (Also macOS PPC uses separate
headers now.)
The version check is required for supporting macOS PPC with SDL 2.0.1
and Travis-CI (Ubuntu Trusty) with SDL 2.0.2.
The client now requires SDL 2.0.5 runtime if compiled against SDL 2.0.5
or newer.
code/libs/macosx/libSDL2-2.0.0.dylib has 2.0.8 for x86 and x86_64 and
2.0.1 for PPC. Add 2.0.1 headers for PPC with a modified SDL_platform.h to
allow compiling using the macOS 10.5 SDK. Using separate headers allows the
engine to check the SDL version for enabling newer SDL features.
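The compile-time check looks like this (the feature body is omitted);
SDL_VERSION_ATLEAST comes from SDL's own headers, which is why the PPC build
can keep the 2.0.1 headers while newer builds pick up 2.0.5+ features:

    #include "SDL_version.h"

    #if SDL_VERSION_ATLEAST( 2, 0, 5 )
        /* code that depends on SDL 2.0.5 or newer APIs */
    #endif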
This is a little bit of future-proofing, but also gives us a little more
flexibility in general; now we can add in the cvars to open a specific
device, etc, that the OpenAL codepath does.
In SDL2, the initialized subsystems are reference counted, so it's safe to
initialize them twice, and it makes the SDL_QuitSubSystem during our shutdown
correctly decrement the count. Before (as a probably-harmless bug), it would
not increment the refcount if the subsystem was already initialized, causing
problems when it decremented it later.
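A minimal sketch of the pattern this enables (error handling abbreviated):

    #include "SDL.h"

    // safe even if the audio subsystem is already up: SDL2 just bumps the refcount
    if ( SDL_InitSubSystem( SDL_INIT_AUDIO ) != 0 ) {
        Com_Printf( "SDL_InitSubSystem( SDL_INIT_AUDIO ) failed: %s\n", SDL_GetError() );
    }

    // ... and the matching call during shutdown decrements that refcount
    SDL_QuitSubSystem( SDL_INIT_AUDIO );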
In September 2017 I moved loading arenas.txt/*.arena files from entering the
start server menu to startup, to fix running out of memory in the Team Arena
UI after opening the start server menu several times.
However, Team Arena completely replaces the uiInfo.mapList array when
switching between the single player and start server menus. So after my
change, entering single player and then entering start server would only
display single player maps. It also caused the SP endofgame menu to use the MP
map list for replay/next map, since arenas were loaded after gameinfo.txt.
Continue loading arena info at start up to avoid reallocating arena info
but move setting up uiInfo.mapList to when entering the start server
menu.
I changed the Color Depth option 'Default' to reset r_stencilbits instead
of setting it to 0, and '32 bit' to use r_stencilbits 8 instead of not changing
the value. (This matches my q3_ui changes in the previous commit.)
Set r_stencilbits when changing graphics presets, like when changing
Color Depth.
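As a sketch (q3_ui-style trap calls; the case values and control variable are
illustrative, and the r_colorbits value for 'Default' is an assumption),
applying the Color Depth choice now touches both cvars:

    switch ( colorDepthChoice ) {
    case 0:  // Default
        trap_Cvar_SetValue( "r_colorbits", 0 );    // assumed: let the driver decide
        trap_Cvar_Reset( "r_stencilbits" );        // reset instead of forcing 0
        break;
    case 2:  // 32 bit
        trap_Cvar_SetValue( "r_colorbits", 32 );
        trap_Cvar_SetValue( "r_stencilbits", 8 );  // previously left unchanged
        break;
    }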
In 2007 in ioquake3 unified-sdl branch (revision 1144) setting
r_colorbits in q3_ui was removed but the Color Depth menu option was
still kept. Setting r_colorbits was not removed from the Team Arena UI.
In 2011 I removed the Color Depth menu option from q3_ui as it did not
change any cvars. Yesterday I restored the option, not realizing this
and thinking that requesting 16-bit color depth worked.
Add setting r_colorbits back to q3_ui so the Color Depth menu option works
again. I changed the Color Depth option 'Default' to reset r_stencilbits
instead of setting it to 0, and '32 bit' to use r_stencilbits 8 instead of not
changing the value.
However, I discovered that r_colorbits 16 does not actually work on my system
(Debian Jessie x86_64 nvidia). ioquake3 was reporting the requested
value instead of the actual obtained value. Fixed in my previous commit.