glStencilOpSeparateATI() should behave exactly the same as
glStencilOpSeparate() so supporting it is easy enough and might help
some people with hardware or drivers that don't support OpenGL 2.0,
like the Mac OS X versions for PPC.
- Fix build with SDL <=2.0.3
SDL_GetGlobalMouseState was introduced in 2.0.4
(which doesn't support OSX 10.5 or older)
- Don't include execinfo.h on Mac OS X 10.4
This file isn't included in the 10.4 SDK
- Use custom typedef for PFNGLSTENCILOPSEPARATEPROC on OSX 10.4/10.5
because the system OpenGL headers for those versions don't have it
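For the stencil fallback and the last two points, minimal sketches of what the guards could look like (a rough illustration, not the actual dhewm3 code; the names InitStencilOpSeparate, GetMousePosition and the MY_PFN... typedef are made up here):
```
#include <SDL.h>
#include <SDL_opengl.h>

// OSX 10.4/10.5: the system OpenGL headers lack the typedef, so declare the
// OpenGL 2.0 signature of glStencilOpSeparate() ourselves
typedef void (APIENTRY *MY_PFNGLSTENCILOPSEPARATEPROC)(GLenum face, GLenum sfail,
                                                       GLenum dpfail, GLenum dppass);

static MY_PFNGLSTENCILOPSEPARATEPROC qglStencilOpSeparate = NULL;

// prefer the OpenGL 2.0 entry point, but fall back to the ATI extension,
// which behaves identically (helps pre-GL2.0 drivers, e.g. OS X on PPC)
static void InitStencilOpSeparate(void) {
	qglStencilOpSeparate =
		(MY_PFNGLSTENCILOPSEPARATEPROC)SDL_GL_GetProcAddress("glStencilOpSeparate");
	if (qglStencilOpSeparate == NULL) {
		qglStencilOpSeparate =
			(MY_PFNGLSTENCILOPSEPARATEPROC)SDL_GL_GetProcAddress("glStencilOpSeparateATI");
	}
}

// SDL_GetGlobalMouseState() was only introduced in SDL 2.0.4,
// so older SDL2 versions need a compile-time fallback
static void GetMousePosition(int *x, int *y) {
#if SDL_VERSION_ATLEAST(2, 0, 4)
	SDL_GetGlobalMouseState(x, y); // desktop coordinates
#else
	SDL_GetMouseState(x, y);       // window-relative fallback for SDL <= 2.0.3
#endif
}
```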
CMAKE_SYSTEM_PROCESSOR used to be broken, CMake "fixed" it by redefining
its meaning (from "Target CPU" to "Host CPU except when crosscompiling").
On Windows it always prints the host CPU, on Linux it at least made trouble
in chroots and when running 64bit kernels with 32bit userlands (this used
to be not totally uncommon on x86 before distros completely switched to
amd64, and apparently Raspbian/Raspberry Pi OS does this on RPi4, see #267)
Thankfully gcc and clang support "-dumpmachine" to print their (default)
target system, so use that instead (MSVC already had a special case).
On the upside, this allows getting rid of the MinGW special case.
I hope this also works with Apple Clang..
The problem was that negative values (from dhewm3tmpres.xyz) were passed
to POW, and POW doesn't have to support negative bases, according to
ARB_fragment_program.txt, and Intel's Linux driver apparently doesn't,
see also https://gitlab.freedesktop.org/mesa/mesa/-/issues/5131
Using MUL_SAT instead of MUL to clamp the value that gets passed to POW
afterwards to [0, 1] fixes the problem without any disadvantages.
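In C++ terms, the change amounts to saturating the base before the pow() call (just an illustration of the idea; the actual fix is a one-instruction change in the ARB shader text):
```
#include <algorithm>
#include <cmath>

// pow() with a negative base (and non-integer exponent) is undefined, so
// saturate the base to [0, 1] first - which is exactly what MUL_SAT does
static float SaturatedPow(float base, float exponent) {
	float saturated = std::min(std::max(base, 0.0f), 1.0f);
	return std::pow(saturated, exponent);
}
```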
so far the code assumed that "result.color" is always used directly,
but the ARB shaders allow creating an alias with the aforementioned
syntax. So turn that into a variable-alias for dhewm3tmpres.
as long as it consists of chars Doom3 supports, i.e. it can be converted
to ISO-8859-1
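(For reference, the "chars Doom3 supports" check essentially boils down to whether the Unicode code point fits into ISO-8859-1; a hypothetical helper, not the actual dhewm3 code:)
```
// ISO-8859-1 covers exactly the Unicode code points U+0000..U+00FF,
// so anything above 0xFF can't be represented and gets rejected
static bool IsDoom3Char(unsigned int codePoint) {
	return codePoint <= 0xFF;
}
```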
also renamed kbdNames to _in_kbdNames to reduce likelihood of clashes
(as it can't be static)
and scale the breakpoint dots accordingly - now they don't look all
squashed anymore.
I think ResizeImageList() is more correct now, at least this helped with
the breakpoint dots.
The `const char* filename` arg is passed from idProgram::CompileText(),
where it's from idProgram::filename - and that filename can get modified
in idCompiler::NextToken() when it calls gameLocal.program.GetFilenum().
If the idStr grows and reallocates for that modification,
the filename pointer becomes invalid.
So store `filename` in an idStr and use that when logging the
compile time.
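Roughly like this (function signature and variable names are approximations, not the exact dhewm3 code):
```
void idCompiler::CompileFile(const char *text, const char *filename) {
	// copy right away: `filename` may point into idProgram::filename, which
	// can be reallocated during compilation (via gameLocal.program.GetFilenum()),
	// leaving the original pointer dangling
	idStr finalFilename = filename;

	float compileTimeMsec = 0.0f;
	// ... tokenize and compile, measuring compileTimeMsec ...

	gameLocal.Printf("Compiled '%s': %.1f ms\n", finalFilename.c_str(), compileTimeMsec);
}
```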
If in_ignoreConsoleKey is set, the console can only be opened with
Shift+Esc, not `/^/whatever, so you can easily type whatever character
is on your "console key" into the game, or even bind that key.
Otherwise, with SDL2, that key (SDL_SCANCODE_GRAVE) always generates the
newly added K_CONSOLE.
in_kbd has a new (SDL2-only) "auto" mode which tries to detect the
keyboard layout based on SDL_GetKeyFromScancode( SDL_SCANCODE_GRAVE ).
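A sketch of what that auto-detection could look like (the mapping from keycodes to in_kbd values is an assumption, not the actual dhewm3 table):
```
#include <SDL.h>

// which keycode does the physical "grave" key (left of 1 / above Tab)
// produce in the currently active layout?
static const char* DetectKbdLayout(void) {
	SDL_Keycode k = SDL_GetKeyFromScancode(SDL_SCANCODE_GRAVE);
	switch (k) {
		case SDLK_BACKQUOTE: return "english"; // US-like layouts
		case SDLK_CARET:     return "german";  // hypothetical mapping
		default:             return "english"; // unknown layout: assume default
	}
}
```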
Wherever Sys_GetConsoleKey() is called, I now take the current state of
Shift into account, so we don't discard more chars than necessary, esp.
when the keyboard-layout (in_kbd) is *not* correctly set.
(TBH the only reason besides SDL1.2 to keep in_kbd around is to ignore
the char generated by the "console key" in the console..)
It's set to 0 by default (which is the original behavior); if set to 1,
SDL2 will grab the keyboard, so Alt-Tab or the Windows Key etc will not
be handled by the operating system but by dhewm3 (=> you can bind the
Windows key like any normal key and it won't open the start menu)
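A minimal sketch of the SDL2 side of that (cvar handling omitted; the helper name is made up):
```
#include <SDL.h>

// SDL2 only grabs the keyboard in addition to the mouse if this hint is
// set before the window grab happens
static void GrabInput(SDL_Window *window, bool grabKeyboard) {
	SDL_SetHint(SDL_HINT_GRAB_KEYBOARD, grabKeyboard ? "1" : "0");
	SDL_SetWindowGrab(window, SDL_TRUE);
}
```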
If a key is pressed whose SDL_Keycode isn't known to Doom3 (has no
corresponding K_* constant), its SDL_Scancode is mapped to the
corresponding newly added K_SC_* scancode constant.
I think I have K_SC_* constants for all keys that differ between
keyboard layouts (which is mostly printable characters; F1-F12, Ctrl,
Shift, ... should be the same on all layouts, which means that e.g.
SDL_SCANCODE_F1 always belongs to SDLK_F1 which the old code already
maps to Doom3's K_F1).
What's extra nice (IMO) is that when Doom3 requests a *localized* name
of the key (like for showing in the bindings menu), we actually use the
name of the SDL_Keycode that *currently* belongs to the scancode, and
esp. the "Western High-ASCII characters" (ISO-8859-1) supported by Doom3
like Ä or Ñ are displayed correctly.
(I already implemented a very similar hack in Yamagi Quake II and
reused the list of scancodes)
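Getting that localized name from SDL2 looks roughly like this (the helper name is made up, but the SDL calls are the ones involved):
```
#include <SDL.h>

// translate the physical scancode to the keycode it produces in the
// *current* layout and ask SDL for that keycode's name, e.g. "Ö" on a
// German layout (SDL returns UTF-8, Doom3 then wants ISO-8859-1)
static const char* LocalizedKeyName(SDL_Scancode sc) {
	SDL_Keycode key = SDL_GetKeyFromScancode(sc);
	return SDL_GetKeyName(key);
}
```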
This should fix most of the problems reported in #323
but it's still a bit wonky with DPI-scaling
I also made the rect calculations a bit more intuitive
and removed a misleading comment in my breakpoint list code
Double-clicking an entry opens the script at the correct line.
Single-clicking the breakpoint symbol in the list removes the breakpoint,
and so does selecting the breakpoint in the list and pressing the Del key.
Added Sys_FreeClipboardData(char*) so I don't have to copy the string
from SDL_GetClipboardText() into a Mem_Alloc() buffer, but can just
do the right thing per platform, which in case of POSIX/SDL2 is
SDL_free().
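For POSIX/SDL2 that pair of functions is essentially just a thin wrapper; a sketch (the real implementations may do more, e.g. charset conversion):
```
#include <SDL.h>

// SDL_GetClipboardText() returns memory that must be released with
// SDL_free(), so the matching free function just forwards to it
char* Sys_GetClipboardData(void) {
	return SDL_GetClipboardText();
}

void Sys_FreeClipboardData(char* data) {
	SDL_free(data);
}
```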
SDL1.2 doesn't have clipboard support, otherwise I'd have removed all
platform-specific implementations and used SDL_Get/SetClipboardText()
everywhere (IIRC AROS only supports SDL1.2?)
Now the game cycles between QuickSave, QuickSave2, QuickSave3, ...
(up to com_numQuicksaves files, 4 by default, up to 99), always
replacing the oldest.
Quick-loading always loads the newest quicksave, but all quicksaves
can be loaded via the load game menu.
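The slot-picking logic presumably boils down to something like this (a self-contained sketch, not the actual dhewm3 code; the timestamp lookup is passed in as a stand-in for checking the savegame files):
```
#include <ctime>
#include <string>

// slots are "QuickSave", "QuickSave2", ..., "QuickSave<N>"; the slot with
// the oldest timestamp (or a missing file, timestamp 0) gets overwritten
std::string NextQuickSaveName(int numQuicksaves,
                              std::time_t (*saveTimestamp)(const std::string&)) {
	std::string oldest = "QuickSave";
	std::time_t oldestTime = saveTimestamp(oldest);
	for (int i = 2; i <= numQuicksaves; ++i) {
		std::string name = "QuickSave" + std::to_string(i);
		std::time_t t = saveTimestamp(name);
		if (t < oldestTime) {
			oldest = name;
			oldestTime = t;
		}
	}
	return oldest;
}
```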
this *shouldn't* matter, but due to some Mesa bug it does:
If the shaders have been loaded already (with R_LoadARBProgram()),
then loading them again (like from the `reloadARBprograms` console cmd
or as happens when `r_gammaInShader` has been modified) will
cause glitches with the open source radeonsi driver (maybe also with
others? at least the open source intel driver seems unaffected).
As r_gammaInShader was marked as modified at startup (before the shaders
were even loaded) they were loaded twice: First as expected when OpenGL
is initialized, then again in R_CheckCvars() which is executed each
frame. Marking it as not modified in R_InitOpenGL() prevents this and
thus works around the bug.
However this means that changing r_gammaInShader at runtime will still
trigger this bug (while with non-broken drivers it switches seamlessly
between gamma in shader and gamma in hardware without a vid_restart).
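The workaround itself is tiny; roughly like this (assuming the cvar system's ClearModified(), details of R_InitOpenGL() omitted):
```
void R_InitOpenGL(void) {
	// ... create the GL context, load the ARB programs, etc. ...

	// the shaders were just loaded explicitly, so clear the "modified" flag
	// to keep R_CheckCvars() from reloading them again on the first frame
	r_gammaInShader.ClearModified();
}
```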
Originally sound updates only happened about every 100ms and
`sampleTime` (or `newSoundTime`) was a multiple of 4096
(`MIXBUFFER_SAMPLES`).
After I changed this to updates every 16ms and made the calculation of
`sampleTime` a lot simpler, it could be any value (as it's the current
number of milliseconds multiplied by 44.1).
It generally seemed to work, but it seems advisable to make it a
multiple of 8 (see also "Fix endless loop when decoding OGGs" commit).
So I round it to the nearest multiple of 8 now. Furthermore I increased
the accuracy when the game has been running for a long time by using
double instead of float, and tried to make sure that `sampleTime` is
always positive (or at least as long as `inTime` is positive).
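The calculation then looks roughly like this (names simplified, not the exact dhewm3 code):
```
// milliseconds -> samples at 44.1kHz, rounded to the nearest multiple of 8;
// double keeps the precision up after hours of playtime
static int MsecToSampleTime(double inTimeMsec) {
	double samples = inTimeMsec * 44.1;
	long long rounded = (long long)(samples / 8.0 + 0.5) * 8; // nearest multiple of 8
	if (rounded < 0) {
		rounded = 0; // keep sampleTime positive even if inTime misbehaves
	}
	return (int)rounded;
}
```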
idStr is used in both the main thread and the async sound thread, so
it had better be thread-safe.. idDynamicBlockAlloc is not.
Use realloc() and free() instead.
For some reason this caused a lot more crashes (due to inconsistencies
in the allocator's heap) with newer Linux distros (like XUbuntu 20.04)
and when using GCC9, while they rarely reproduced with GCC7 or on
XUbuntu 18.04
fixes #391
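A minimal illustration of the change (not idStr itself, just the allocation pattern it switches to):
```
#include <cstddef>
#include <cstdlib>

// grow the string buffer via the C runtime, which is thread-safe,
// instead of going through the shared idDynamicBlockAlloc heap
struct GrowableBuffer {
	char *data = nullptr;
	std::size_t capacity = 0;

	void EnsureCapacity(std::size_t amount) {
		if (amount <= capacity) {
			return;
		}
		data = (char *)std::realloc(data, amount); // keeps the old contents
		capacity = amount;
	}

	~GrowableBuffer() {
		std::free(data);
	}
};
```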
In idSampleDecoderLocal::DecodeOGG() `totalSamples` was 1 and
`reqSamples` was 0, which caused an endless loop.. this was caused by
idSoundWorldLocal::ReadFromSaveGame() setting
`chan->openalStreamingOffset` to an odd number, I think due to
`currentSoundTime` being an odd number.
To fix that, I round up `chan->openalStreamingOffset` to a (very) even
number, and to be double-sure I also added a check in DecodeOGG() to
make sure it exits the loop if `reqSamples` is 0.
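Sketches of both safeguards (the surrounding sound code is omitted; RoundUpToMultipleOf8() is a made-up helper name, and "multiple of 8" is assumed to be the same rule as in the sampleTime commit above):
```
// round up to the next multiple of 8 - the "(very) even number"
static inline int RoundUpToMultipleOf8(int x) {
	return (x + 7) & ~7;
}

// in idSoundWorldLocal::ReadFromSaveGame():
//   chan->openalStreamingOffset = RoundUpToMultipleOf8( chan->openalStreamingOffset );
//
// in idSampleDecoderLocal::DecodeOGG(), before entering the decode loop:
//   if ( reqSamples <= 0 ) {
//       return 0; // nothing requested, don't loop forever
//   }
```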