.. by surrounding it with newlines.
This warning is shown when trying to start a mod without its game dll
(because the user has forgotten to unpack the mod dll archive, or
because it's a mod that doesn't have its own dll).
Surrounding it with empty lines hopefully makes it easier for users
to figure out what went wrong when they look at the log.
When selecting a light in the Radiant (builtin Windows-only level editor)
and pressing `j`, the light editor opened (as expected) but said that
no entity was selected.
That was because com_editorActive was false, most probably because of
e8a1eb8b "Fix mouse remaining ungrabbed when running map from Radiant",
which sets com_editorActive to false (via com->ActivateTool(false)) if
the Radiant window loses focus, which should be the case when opening
the light editor window.
`com_editors & EDITOR_RADIANT` is != 0 as long as the Radiant is
running, no matter which window currently has focus, so checking that
instead works better.
and add entry to changelog about absolute mouse input etc - this change
was already in RC1 but I forgot to mention it..
and fixed comments for GAME_NAME and ENGINE_VERSION (nowadays dhewm3
uses ENGINE_VERSION for the window title)
It happened a lot more since
504b572a "Update sounds at ~60Hz instead of ~10Hz, fixes #141"
(because then MixLoop() is more likely to be called in the narrow
timeframe this can happen during level load) but could happen before.
So far I only observed it when starting a new game in Classic Doom 3.
See comment in the change and #461 for more information.
Hoping that Team Future eventually fulfills their promise of releasing
the source of the highly praised Phobos mod, I'm adding it to the list.
Currently this is not very useful, but when (if) the phobos source
becomes available I at least won't have to do a new dhewm3 release to
support it (but only provide tfphobos.dll/.so/.dylib).
by adding special cases for them that set `fs_game_base d3xp`.
Unfortunately there is no more generic way to do this, as mods have
no way to tell the engine if they need fs_game_base.
The assertion that triggered was "assert(iconvDesc == (SDL_iconv_t)-1);"
in Sys_InitInput() - because when loading a mod the window is recreated,
calling Com_ReloadEngine_f() -> idCommonLocal::InitGame()
-> idCommonLocal::InitRenderSystem() -> idCommonLocal::InitOpenGL()
-> R_InitOpenGL() -> Sys_InitInput()
Before that Com_ReloadEngine_f() calls commonLocal.ShutdownGame( true );
which calls the equivalent Shutdown() functions - except so far no one
called Sys_ShutdownInput() (which closes iconvDesc and resets it to -1).
Fixed that by making idRenderSystemLocal::ShutdownOpenGL() call it.
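A minimal sketch of that fix (the rest of the function body is left out, and
where exactly the call sits in it is an assumption):

    void idRenderSystemLocal::ShutdownOpenGL( void ) {
        // shut down input together with OpenGL/the window, so the matching
        // Sys_InitInput() on engine reload doesn't hit the iconvDesc assertion
        Sys_ShutdownInput();
        // ... existing OpenGL/window shutdown code ...
    }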
In the savegame from that bugreport, "monster_zsec_shotgun_12" was
lying dead pretty much at its spawn point.
script/map_alphalabs4.script moves those "ride_of_death" platforms
around, and at the end of a cycle teleports "ride_of_death*_parent" back
to its starting position - and the "ride_of_death*" bound to it, which
is a pushing mover, just gets dragged along by the physics code and thus
can collide with that zombie cadaver, which then tries to push it along,
causing that assertion in TestHugeTranslation().
This is a map bug - Doom3 even prints a warning:
WARNING: script/map_alphalabs4.script(722): Thread
'map_alphalabs4::RideOfDeathPath': teleported 'ride_of_death2_parent'
which has the pushing mover 'ride_of_death2' bound to it
So I just disable that assertion for this specific case..
Also moved the assertion behind the corresponding warning, so that gets
printed before the assertion kills the game..
Also a small change to CMakeLists.txt that should make it work to enable
LINUX_RELEASE_BINS after CMake has already been run without it.
For some reason Wayland thought it would be clever to be the only
windowing system that (non-optionally) uses the alpha chan of the
window's default OpenGL framebuffer for window transparency.
This always caused glitches with dhewm3, as Doom3 uses that alpha-chan
for blending tricks (with GL_DST_ALPHA) - especially visible in the main
menu or when the flashlight is on.
So far the workaround has been r_waylandcompat which requests an OpenGL
context/visual without alpha chan (0 alpha bits), but that also causes
glitches.
There's an EGL extension that's supposed to fix this issue
(EGL_EXT_present_opaque), and newer SDL2 versions use it (when using
the wayland backend) - but unfortunately the Mesa implementation is
broken (seems to provide a visual without alpha channel even if one was
requested), see https://gitlab.freedesktop.org/mesa/mesa/-/issues/5886
and https://github.com/libsdl-org/SDL/pull/4306#issuecomment-1014770600
for the corresponding SDL2 discussion
To work around this issue, dhewm3 now disables the use of that EGL
extension and (optionally) makes sure the alpha channel is opaque at
the end of the frame.
This behavior is controlled with the r_fillWindowAlphaChan CVar:
If it's 1, this is always done (regardless of whether wayland is used),
if it's 0 it's not done (even on wayland),
if it's -1 (the default) it's only done if the SDL "video driver" is
wayland (this could easily be enhanced later in case other windowing
systems have the same issue)
r_waylandcompat has been removed (it never worked properly anyway),
so now the window always has an alpha chan
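A minimal sketch of what "make the alpha channel opaque at the end of the
frame" can look like in GL - assuming the usual qgl wrappers; whether dhewm3
does it exactly this way is an assumption:

    // write only to the alpha component and clear it to 1.0 (opaque)
    qglColorMask( GL_FALSE, GL_FALSE, GL_FALSE, GL_TRUE );
    qglClearColor( 0.0f, 0.0f, 0.0f, 1.0f );
    qglClear( GL_COLOR_BUFFER_BIT );
    qglColorMask( GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE );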
so if someone configured 16x AA on a system that doesn't support it
(like when using AMD's open source driver), 8x will be tried before
falling back to a 640x480 window with no AA at all.
(and then it'll try 4x and then 2x and then no AA at all, and only then
reducing color depth will start, and even later it'll fall back to
a small 640x480 window)
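Roughly, the new fallback works like this (illustrative sketch only;
tryCreateWindow() and glimpParms are made-up placeholders, not actual dhewm3
names):

    int msaa = r_multiSamples.GetInteger();
    while ( !tryCreateWindow( glimpParms, msaa ) && msaa > 0 ) {
        // halve the MSAA level (16x -> 8x -> 4x -> 2x -> off) before the old
        // fallbacks (lower color depth, finally a 640x480 window) kick in
        msaa = ( msaa > 2 ) ? msaa / 2 : 0;
        common->Printf( "Window creation failed, retrying with %dx MSAA\n", msaa );
    }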
I don't have such hardware, so I can't test; I also didn't actually build
dhewm3 (I would have to build dependencies for ARM first..), but when
configuring ARM or ARM64 targets for Visual C++, the architecture is now
correctly detected in CMake.
Also, now D3_ARCH should always be "arm" for 32bit ARM
and "arm64" for 64bit ARM, also when building with GCC or clang.
I was lazy and just renamed SDL_win32_main's stdout.txt - but I still
added the dhewm3log-old.txt backup function.
I also renamed Sys_GetHomeDir() to Win_GetHomeDir() as it's Win32-only
On Windows it's in Documents\My Games\dhewm3\dhewm3log.txt
and refactorings needed for that (I want to create the log right at the
beginning before much else has been initialized, so using things like
idStr or Sys_GetPath() isn't safe)
save_path being $XDG_DATA_HOME/dhewm3/ (usually ~/.local/share/dhewm3/)
on most POSIX systems, $HOME/Library/Application Support/dhewm3/ on Mac
If the log file already exists, it's renamed to dhewm3log-old.txt first,
so there's always the most recent and the last log available.
These are now used by idStr::(v)snPrintf(), and in the future can
be used if a (v)snprintf() that's guaranteed not to call
common->Warning() or similar is needed (e.g. used during early startup)
There were lots of places in the code that called Sys_GrabInput(),
some of them each frame.
Most of this is unified in events.cpp now, in handleMouseGrab() which
is called once per frame by Sys_GenerateEvents() - this makes reasoning
about when the mouse is grabbed and when not a lot easier.
Sys_GrabInput(false) still is called in a few places, before operations
that tend to take long (like loading a map or vid_restart), but
(hopefully) not regularly anymore.
The other big change is that the game now uses SDL's absolute mouse mode
for fullscreen menus (except the PDA which is an ugly hack), so the
ingame cursor is at the same position as the system cursor, which
especially helps when debugging with `in_nograb 1` and should also help
if someone wants to integrate an additional GUI toolkit like Dear ImGui.
glStencilOpSeparateATI() should behave exactly the same as
glStencilOpSeparate() so supporting it is easy enough and might help
some people with hardware or drivers that don't support OpenGL 2.0,
like the Mac OSX versions for PPC.
- Fix build with SDL <=2.0.3
SDL_GetGlobalMouseState was introduced in 2.0.4
(which doesn't support OSX 10.5 or older)
- Don't include execinfo.h on Mac OS X 10.4
This file isn't included in the 10.4 SDK
- Use custom typedef for PFNGLSTENCILOPSEPARATEPROC on OSX 10.4/10.5
because the system OpenGL headers for those versions don't have it
CMAKE_SYSTEM_PROCESSOR used to be broken, CMake "fixed" it by redefining
its meaning (from "Target CPU" to "Host CPU except when crosscompiling").
On Windows it always prints the host CPU, on Linux it at least made trouble
in chroots and when running 64bit kernels with 32bit userlands (this used
to be not totally uncommon on x86 before distros completely switched to
amd64, and apparently Raspbian/Raspberry Pi OS does this on RPi4, see #267)
Thankfully gcc and clang support "-dumpmachine" to print their (default)
target system, so use that instead (MSVC already had a special case).
On the upside, this allows getting rid of the MinGW special case.
I hope this also works with Apple Clang..
The problem was that negative values (from dhewm3tmpres.xyz) were passed
to POW, and POW doesn't have to support negative bases, according to
ARB_fragment_program.txt, and Intel's Linux driver apparently doesn't,
see also https://gitlab.freedesktop.org/mesa/mesa/-/issues/5131
Using MUL_SAT instead of MUL to clamp the value that gets passed to POW
afterwards to [0, 1] fixes the problem without any disadvantages.
so far the code assumed that "result.color" is always used directly,
but the ARB shaders allow creating an alias with the aforementioned
syntax. So turn that into a variable alias for dhewm3tmpres.
as long as it's chars Doom3 supports, i.e. it can be converted
to ISO-8859-1
also renamed kbdNames to _in_kbdNames to reduce the likelihood of clashes
(as it can't be static)
and scale the breakpoint dots accordingly - now they don't look all
squashed anymore.
I think ResizeImageList() is more correct now, at least this helped with
the breakpoint dots.
The `const char* filename` arg is passed from idProgram::CompileText(),
where it's from idProgram::filename - and that filename can get modified
in idCompiler::NextToken() when it calls gameLocal.program.GetFilenum()
and if the idStr grows and reallocates for that modification,
the filename pointer becomes invalid.
So store `filename` in an idStr and use that when logging the
compile time.
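The fix boils down to something like this (a sketch; the surrounding function
and the compileTimeMs variable are simplified/hypothetical):

    idStr origFileName = filename;  // stable copy; `filename` may be invalidated below
    // ... compilation happens here; idCompiler::NextToken() may call
    //     gameLocal.program.GetFilenum(), which can reallocate idProgram::filename
    //     and thus invalidate the original `filename` pointer ...
    gameLocal.Printf( "Compiled '%s': %u ms\n", origFileName.c_str(), compileTimeMs );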
If in_ignoreConsoleKey is set, the console can only be opened with
Shift+Esc, not `/^/whatever, so you can easily type whatever character
is on your "console key" into the game, or even bind that key.
Otherwise, with SDL2, that key (KEY_SCANCODE_GRAVE) always generates the
newly added K_CONSOLE.
in_kbd has a new (SDL2-only) "auto" mode which tries to detect the
keyboard layout based on SDL_GetKeyFromScancode( SDL_SCANCODE_GRAVE ).
Wherever Sys_GetConsoleKey() is called, I now take the current state of
Shift into account, so we don't discard more chars than necessary, esp.
when the keyboard layout (in_kbd) is *not* correctly set.
(TBH the only reason besides SDL1.2 to keep in_kbd around is to ignore
the char generated by the "console key" in the console..)
It's set to 0 by default (which is the original behavior), if set to 1,
SDL2 will grab the keyboard, so Alt-Tab or the Windows Key etc will not
be handled by the operating system but by dhewm3 (=> you can bind the
Windows key like any normal key and it won't open the start menu)
If a key is pressed whose SDL_Keycode isn't known to Doom3 (has no
corresponding K_* constant), its SDL_Scancode is mapped to the
corresponding newly added K_SC_* scancode constant.
I think I have K_SC_* constants for all keys that differ between
keyboard layouts (which is mostly printable characters; F1-F12, Ctrl,
Shift, ... should be the same on all layouts, which means that e.g.
SDL_SCANCODE_F1 always belongs to SDLK_F1 which the old code already
maps to Doom3's K_F1).
What's extra nice (IMO) is that when Doom3 requests a *localized* name
of the key (like for showing in the bindings menu), we actually use the
name of the SDL_Keycode that *currently* belongs to the scancode, and
esp. the "Western High-ASCII characters" (ISO-8859-1) supported by Doom3
like Ä or Ñ are displayed correctly.
(I already implemented a very similar hack in Yamagi Quake II and
reused the list of scancodes)
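In pseudo-C++ the mapping works roughly like this (the helper function names
and the exact K_SC_* constant name are made up for illustration):

    int mapSDLKey( const SDL_Keysym &keysym ) {
        int key = keycodeToDoom3Key( keysym.sym );       // e.g. SDLK_F1 -> K_F1, as before
        if ( key == 0 ) {
            key = scancodeToDoom3Key( keysym.scancode ); // e.g. SDL_SCANCODE_GRAVE -> K_SC_GRAVE
        }
        return key;
    }
    // for the bindings menu, the *localized* name comes from the keycode that
    // currently belongs to the scancode:
    //   SDL_GetKeyName( SDL_GetKeyFromScancode( scancode ) )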
This should fix most of the problems reported in #323
but it's still a bit wonky with DPI-scaling
I also made the rect calculations a bit more intuitive
and removed a misleading comment in my breakpoint list code
Double-clicking an entry opens the script at the correct line.
Single-clicking the breakpoint symbol in the list removes the breakpoint,
and so does selecting the breakpoint in the list and pressing the Del key.
Added Sys_FreeClipboardData(char*) so I don't have to copy the string
from SDL_GetClipboardText() into a Mem_Alloc() buffer, but can just
do the right thing per platform, which in case of POSIX/SDL2 is
SDL_free().
SDL1.2 doesn't have clipboard support, otherwise I'd have removed all
platform-specific implementations and used SDL_Get/SetClipboardText()
everywhere (IIRC AROS only supports SDL1.2?)
Now the game cycles between QuickSave, QuickSave2, QuickSave3, ...
(up to com_numQuicksaves files, 4 by default, up to 99), always
replacing the oldest.
Quick-loading always loads the newest quicksave, but all quicksaves
can be loaded via the load game menu.
this *shouldn't* matter, but due to some Mesa bug it does:
If the shaders have been loaded already (with R_LoadARBProgram()),
then loading them again (like from the `reloadARBprograms` console cmd
or as it happens if `r_gammaInShader` has been modified) will
cause glitches with the open source radeonsi driver (maybe also with
others? at least the open source intel driver seems unaffected).
As r_gammaInShader was marked as modified at startup (before the shaders
were even loaded) they were loaded twice: First as expected when OpenGL
is initialized, then again in R_CheckCvars() which is executed each
frame. Marking it as not modified in R_InitOpenGL() prevents this and
thus works around the bug.
However this means that changing r_gammaInShader at runtime will still
trigger this bug (while with non-broken drivers it switches seamlessly
between gamma in shader and gamma in hardware without a vid_restart).
Originally sound updates only happened about every 100ms and
`sampleTime` (or `newSoundTime`) was a multiple of 4096
(`MIXBUFFER_SAMPLES`).
After I changed this to updates every 16ms and made the calculation of
`sampleTime` a lot simpler, it could be any value (as it's the current
time in milliseconds multiplied by 44.1).
It generally seemed to work, but it seems advisable to make it a
multiple of 8 (see also "Fix endless loop when decoding OGGs" commit).
So I round it to the nearest multiple of 8 now. Furthermore I increased
the accuracy when the game has been running for a long time by using
double instead of float, and tried to make sure that `sampleTime` is
always positive (or at least as long as `inTime` is positive).
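A sketch of that calculation (simplified, variable names are not exactly as
in the code):

    // current time in milliseconds -> 44.1kHz sample time, with double precision
    double t = (double)inTimeMs * 44.1;
    unsigned int sampleTime = (unsigned int)( t + 0.5 );
    sampleTime = ( sampleTime + 4 ) & ~7u;   // round to the nearest multiple of 8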
idStr is used in both the main thread and the async sound thread, so
it should better be thread-safe.. idDynamicBlockAlloc is not.
Use realloc() and free() instead.
For some reason this caused a lot more crashes (due to inconsistencies
in the allocator's heap) with newer Linux distros (like XUbuntu 20.04)
and when using GCC9, while they rarely reproduced with GCC7 or on
XUbuntu 18.04
fixes#391
In idSampleDecoderLocal::DecodeOGG() `totalSamples` was 1 and
`reqSamples` was 0, which caused an endless loop.. this was caused by
idSoundWorldLocal::ReadFromSaveGame() setting
`chan->openalStreamingOffset` to an odd number, I think due to
`currentSoundTime` being an odd number.
To fix that, I round up `chan->openalStreamingOffset` to a (very) even
number, and to be double-sure I also added a check in DecodeOgg() to
make sure it exits the loop if `reqSamples` is 0.
If those functions (e.g. called by common->Printf(), common->Error())
weren't called from the mainthread and win_outputEditString was set to 1,
a deadlock could occur.
Specifically, the async thread (handling sound) was calling
common->Warning() -> Sys_Printf() -> Conbuf_AppendText() which called
SendMessageA() which blocks until the main thread handles the message.
The main thread however was in idSampleDecoderLocal::Decode() trying to
enter CRITICAL_SECTION_ONE, which was held by the async thread
(it's used to synchronize sound handling between main and async thread).
So now if Sys_Printf() (or Sys_Error() which should have the same problem)
is not called by the main thread, the text is buffered and
Conbuf_AppendText() is called for the buffered lines in the next frame
in Win_Frame().
In idSampleDecoderLocal::DecodeOGG() sometimes (esp. in The Lost Mission
mod) it happens that stb_vorbis_get_samples_float() decodes one sample
less than expected so one is left and when trying to decode that,
stb_vorbis_get_samples_float() returns 0, which we interpreted as an error.
This case is now handled more gracefully: No warning is printed (except
if developer 1) and failed is not set (setting it would prevent the sound
from being played again, I think).
the script paths were wrong, on Linux they were like
"pak000.pk4/script/doom_util.script" while on Windows it's only
"script/doom_util.script".
Fixed idFileSystemLocal::OSPathToRelativePath() to skip ...pk4/
also fixed GCC compile error in Common.cpp
only override cmd->viewDef in RB_DrawView() if we're drawing the
primary view (which for several calculations before actual drawing
was set to the saved/locked render view)
Note that r_lockSurfaces is more useful with r_useScissor 0 (otherwise
there's black bars over the screen when moving) and r_shadows 0 (otherwise
areas that weren't visible when locking are black because the lights
there are skipped)
remaining bug: gui surfaces move around the screen when looking around
set(CMAKE_REQUIRED_LIBRARIES backtrace) tells our custom libbacktrace
availability check that it needs to link against libbacktrace.
Seems like it also tells other unrelated compiler-checks like for
-fvisibility=hidden to link against libbacktrace, so if it's not
available they fail as well.
Fixed by unsetting CMAKE_REQUIRED_LIBRARIES after the backtrace check.
While debian-based distros ship libbacktrace as part of libgcc,
apparently in Arch Linux and openSUSE (and possibly others) it's a
separate package, so I mention it in the README as an (optional)
dependency now and made CMake print a warning if it's not found.
according to https://gcc.gnu.org/bugzilla/show_bug.cgi?id=100839
the real compiler flag enabling this bullshit isn't
-fexpensive-optimizations but -ffp-contract=fast which for some(*)
reason is default in optimized builds.
I think it's best to disable that optimization globally in case it
also breaks other code (I really don't want to spend several days to
hunt such an idiot bug again). I really doubt it makes any measurable
performance difference.
As https://twitter.com/stephentyrone/status/1399424911328874499 says
that clang might also enable this in the future (though to =on instead
of =fast which should be a bit saner but would still break our code),
just set this option for all GCC-ish compilers incl. clang.
(*) the reason of course is that GCC developers don't develop GCC for
their users but to win idiotic SPEC benchmarks
Only happened if `ONATIVE` was enabled (or some other flag was set
that enables the FMA extension), the root cause was that the cross
product didn't return 0 when it should, but a small value < 0.
Caused some faces to be missing in maps compiled with dmap.
https://github.com/RobertBeckebans/RBDOOM-3-BFG/issues/436#issuecomment-851061826
has lots of explanation.
I think this is a compiler bug, this commit works around it.
fixes#147
I'm gonna use it with libbacktrace - I'll need the path of the
executable before I can use idStr (and thus before I could call
Sys_GetPath(PATH_EXE, str)).
The problem was that the editors called ChoosePixelFormat() instead of
wglChoosePixelFormatARB() - and the normal ChoosePixelFormat() has no
attribute for MSAA, so if MSAA is enabled (by SDL2 which calls the wgl
variant), ChoosePixelFormat() will return an incompatible format and
the editors don't get a working OpenGL context.
So I wrote a wrapper around ChoosePixelFormat() that calls the wgl variant
if available, and all the necessary plumbing around that.
While at it, removed the unused qwgl*PixelFormat function pointers and
suppressed the "inconsistent dll linkage" warnings for the gl stubs
Minimum required Windows version is XP again (instead of Win10).
Win_GetWindowScalingFactor() tries to use two dynamically loaded functions
from newer windows versions (8.1+, Win10 1607+) and has a fallback for
older versions that also seems to work (at least if all displays have
the same DPI).
Moved the function to win_main.cpp so the dynamically loaded functions
can be loaded at startup; so edit_gui_common.cpp could be removed again.
- Upped windows version ( needs to go back, and a runtime dpi check comes in place )
- Changed warning settings for MSVC to conform to latest compilers
- Fixed up _afxdll
In idDeclManagerLocal::CreateNewDecl() decl->self->DefaultDefinition();
crashed because self was uninitialized.
Now it gets set to NULL in the constructor and initialized to something
sensible in decl->AllocateSelf();
this is part of #378
it wants to store a pointer to itself in an idWinVar - on 32bit idWinInt
was suitable for that, on 64bit it's not, so instead convert the pointer
to a hex-string and stuff it in an idWinStr
also fix a crash when adding a choiceDef in the gui editor
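The pointer<->string round trip described above boils down to something like
this (illustrative only; thisPtr and storedStr are placeholder names, not the
exact dhewm3 code):

    char buf[32];
    idStr::snPrintf( buf, sizeof( buf ), "%p", (void *)thisPtr ); // pointer -> hex string
    // ... the string is stored in the idWinStr; later, to get the pointer back:
    void *ptr = NULL;
    sscanf( storedStr, "%p", &ptr );                              // hex string -> pointer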
All pointer<->integer conversion truncation warnings I'm aware of are
now enabled for MSVC, to hopefully finally get that tool code 64bit-clean.
I had to adjust the idCVar code for this - it should've been safe enough
(highly unlikely that a valid pointer is exactly 0xFFFFFFFF on 64bit),
but triggered those warnings - now it should behave the same as before
but the warnings (which are now errors) are silenced.
The only thing that still prevents dhewm3 from building with these
warnings-as-errors is rvGEWindowWrapper
rvGEWindowWrapper is still TODO - it's not as straightforward, because
it insists on storing a pointer in an idWinVar (using idWinInt), but there
is no idWinVar type for pointers
the commented out code clamped the loaded filesize of affected .dds
images (crash observed with dds/guis/assets/splash/pdtempa.dds) to
200KB - but then later tries to load it and skip the first mipmap,
resulting in reading invalid memory (> 200KB into the file).
No idea what this was supposed to achieve, but it's disabled now
and the crash (at startup) is gone.
fixes#374
It can be set to a value between 0.0 and 1.0.
1.0 sounds like it did before introducing this cvar; as many people
found the effect way too strong, I made 0.5 the default value.
Otherwise especially looped sounds continue playing while the menu
is open, especially noticeable when opening the menu while firing
the chaingun (the whirring sound continues playing).
The code to decompress OGG files directly on load if they're shorter
than s_decompressionLimit seconds (usually 6) accidentally got broken
when removing the non-OpenAL soundbackends:
`if ( objectInfo.wFormatTag == WAVE_FORMAT_TAG_OGG )` slipped into the
`if ( objectInfo.wFormatTag == WAVE_FORMAT_TAG_PCM )` block - obviously
both can't be true at the same time so the OGG case was never executed.
Now it's in its own block again. I put it into a {} scope so it doesn't
have to be re-indented (=> more useful with git blame)
Seems to work; note that idWaveFile is only ever used in idSoundSample::Load()
As stb_vorbis doesn't support custom callbacks for reading, I feed it
the full .ogg files as a buffer. Shouldn't make much of a difference
though - either the whole file is decoded on load anyway (so the buffer
is freed after decoding), or it's streamed, but in that case the old code
also kept the whole ogg file in memory by using idFile_Memory.
I also added warning messages in places where calls to stb_vorbis_*()
can fail, where there were none in the equivalent libvorbis code.
libjpeg is a pain in the ass, especially due to Ubuntu shipping
libjpeg-turbo in jpeg8 mode as their default libjpeg, while every other
distro I checked (including debian!) ships libjpeg-turbo in jpeg6.2 mode
Thankfully stb_image.h exists - just a single header and it even
has a (much!) friendlier API.
It's not like Doom3 (or any of the mods I checked) actually use JPEGs,
but I'm sure if I'd drop support completely, someone would complain
(perhaps rightfully so).
the problem was that the sources were still associated to it, because
they get deleted after the listenerSlot: the sources get freed in
idSoundSystemLocal::Shutdown() which is called by
idCommonLocal::ShutdownGame() in line 3217,
while the listenerSlot is deleted via idSessionLocal::Shutdown()
-> delete sw/rw -> idSoundWorldLocal::~idSoundWorldLocal()
-> idSoundWorldLocal::Shutdown() - and idSessionLocal::Shutdown() is
called in idCommonLocal::ShutdownGame() line 3211, before the other.
I'm not gonna mess with the order of deleting things in ShutdownGame(),
but it's sufficient to unassociate the effect slot from the source
when destroying the emitters in idSoundWorldLocal::Shutdown(),
by adding a call for that to idSoundChannel::ALStop() - and destroying
the emitters before deleting listenerSlot.
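Detaching an effect slot from a source is a single standard EFX call; whether
ALStop() does exactly this (and the variable name) is an assumption:

    // stop routing this source through any auxiliary effect slot
    alSource3i( openalSource, AL_AUXILIARY_SEND_FILTER, AL_EFFECTSLOT_NULL, 0, AL_FILTER_NULL );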
Before this fix, with ALSOFT_LOGLEVEL=3 you got the following warning:
(WW) Error generated on context 0x5578fce2a280, code 0xa004,
"Deleting in-use effect slot 1"
Thanks to openal-soft's KittyCat for pointing this out!
Turns out that ${CMAKE_SYSTEM_PROCESSOR} is broken on Windows
(both for Visual Studio and for 32bit MinGW/msys shells which for some
reason seem to output x86_64 in uname -a)
Also suppressed -Wclass-memaccess warnings from GCC >= 8
idClipModel::axis is an idMat3 rotation matrix.
Usually it's an identity matrix, but if the player is pushed around by
an idPush entity it's modified and apparently can (wrongly) remain
modified, possibly when saving while idPush is active.
This seems to happen sometimes on the crashing elevator in game/delta1.
The fix/workaround is to reset it to mat3_identity when loading a
savegame.
like the dhewm3 version and the OS and architecture of the dhewm3
version that created the savegame.
Also added an internalSavegameVersion to be independent of BUILD_NUMBER
fixes#344
"Fix "t->c->value.argSize == func->parmTotal" Assertion in Scripts, #303"
had broken old savegames because the script checksum
(idProgram::CalculateChecksum()) changed, see #344.
This is fixed now, also the BUILD_NUMBER is increased so old savegames
can be identified for a workaround.
Don't use this commit without the next one which will further modify the
savegame format (for the new BUILD_NUMBER 1305)
This reverts commit 01ac144b09.
Looks like this broke the build on some systems, because some
package managers pack SDL2Config.cmake and others pack sdl2-config.cmake
in their SDL2 development packages (for some reason, SDL2 seems
to ship both in their source, and they appear to be incompatible).
Shipping our own FindSDL2.cmake might be unclean/ugly/whatever, but
at least it works (most of the time? at least it appears to work better
than not shipping it)
* Windows DLL-Load-Errors
Added error reporting using FormatMessage(), ignoring if the DLL just
doesn't exist, custom warning for "[193 (0xC1)] is not a valid Win32
application." (probably wrong architecture)
* update gitignore with build folder
.. but only if the file exists.
It's ok if mods don't have their own DLL/.so, but if they do have one
and loading fails it's interesting why they failed (e.g. no access
rights, 64bit lib with 32bit executable or other way around, missing
symbols due to wrong libc version, ...)
The same should be done for Windows, but that's still TODO.
I updated sdl2-config.cmake in dhewm3-libs so building on Windows with
them works again, however that required also a little change in
dhewm3's CMakeLists.txt
I also still had some uncommited changes for that fullscreen workaround
that I'm committing now.
.. that doesn't switch the display resolution, but creates a borderless
fullscreen window at current desktop resolution.
SDL2-only (SDL_WINDOW_FULLSCREEN_DESKTOP).
Doing this with an additional CVar instead of r_fullscreen 2 or similar
has the advantage that it works properly with Alt-Enter
SDL has a bug (at least on Windows) where SDL_CreateWindow() with
SDL_WINDOW_FULLSCREEN doesn't use the configured resolution (if it's
higher than the current desktop resolution).
Try to work around that - based on Yamagi Quake II code.
Also, if GLimp_Init() fails, the "safe mode" fallback is now in
windowed mode instead of fullscreen mode.
The code used #ifdef __ppc__, probably a leftover from old Mac support,
which apparently doesn't work on all PPC systems (maybe not 64bit PPC).
Now using the LittleFloat() function instead which exists for
exactly this purpose and already handles endianess properly.
If a "class" (object) in a Script has multiple member function
prototypes, and one function implementation later calls another before
that is implemented, there was an assertion when the script was parsed
(at map start), because the size of function arguments at the call-site
didn't match what the function expected - because the function hadn't
calculated that size yet, that only happened once its own
implementation was parsed.
Now it's calculated (and stored) when the prototype/declaration is
parsed to prevent this issue, which seems to be kinda common with Mods,
esp. Script-only mods, as the release game DLLs had Assertions disabled.
It corrupted the stack when called with buffers allocated on the stack
and numSamples that are not a multiple of four, apparently, by writing
4 floats too many, at least in the 22KHz Stereo case..
This caused the crash described in
https://github.com/dhewm/dhewm3/issues/303#issuecomment-678809662
Now it just uses the generic C code, like all platforms besides MSVC/x86
already do.
Apparently VS2019 (or its integrated CMake?) interpreted
add_definitions(/MP) as "also pass it to rc.exe" which didn't like it..
no idea why this worked before, but now it seems to work everywhere.
fix#304
Also set VS_STARTUP_PROJECT so when launching the debugger in VS it doesn't
try to launch ALL_BUILD (...) but dhewm3.
idSessionLocal::SaveGame() uses saveName for both the description in
savegames/bla.txt and the filename itself ("bla" in this case), which
is a "scrubbed" version of that string ("file extension" removed,
problematic chars replaced with '_').
In case of Autosaves, MoveToNewMap() passes a translated name of the
map as saveName, which makes sense for the description, but not so much
for the filename.
In Spanish the Alpha Labs names are like "LAB. ALFA 1", "LAB. ALFA 2" ..
and everything after "LAB" is cut off as a file extension - so every
autosave of an Alpha Labs part overwrites the last one, as they all get
the same name...
The solution is to pass an additional (optional) argument saveFileName
to idSessionLocal::SaveGame(), and for autosaves pass the mapname
(which is the filename without .map; yes, it might contain slashes,
but the scrubber will turn them into _) instead of the translated name
as saveFileName.
If you save, you get a message like "Game Saved..." which goes away
after a few seconds. This happens at the very end of idPlayer::Save():
    if ( hud ) {
        hud->SetStateString( "message", /* .. left out .. */ );
        hud->HandleNamedEvent( "Message" );
    }
And handled in hud.gui, "onNamedEvent Message { ..."
However, if you save again before it's gone, it'll be shown after
loading the savegame and not go away until you save again..
This works around that issue by setting an empty message after loading
a savegame.
The underlying problem (which is not fixed!) seems to be that the
transition GUI command (that's executed when hud.gui handles the
"Message" event that's used to show this message) is probably not
properly saved/restored so fading out the message isn't continued
after loading.
idStr::StripFileExtension() (and SetFileExtension() which uses it) and
others didn't work correctly if there was a dot in a directory name,
because they just searched from last to first char for '.', so if the
current filename didn't have an extension to cut off, they'd just cut off
at any other '.' they found.
So D:\dev\doom3.data\base\maps\bla could turn into D:\dev\doom3
(or, for SetFileExtension(), D:\dev\doom3.map)
While at it, I made most of the idStr code that explicitly checked for '\\'
and '/' (and maybe ':' for AROS) use a little "bool isDirSeparator(int c)"
function so we don't have the #ifdefs for different platforms all over
the place.
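The helper is essentially this (a sketch; assuming the AROS check sits behind
that platform's define, the exact form in idStr may differ):

    ID_INLINE bool isDirSeparator( int c ) {
    #ifdef __AROS__
        if ( c == ':' ) {
            return true;
        }
    #endif
        return ( c == '/' || c == '\\' );
    }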
the game was frozen (the main menu and console worked though) when
switching from the Radiant to the engine (with F2 or that button).
Turned out common->ActivateTool( false ) must be called if the game window
has focus, so idSessionLocal::Frame() doesn't return early (which would
skip running the game code).
Furthermore, there was no sound when switching from Radiant to the game,
because the SoundWorld was set to something editor-specific. Fixed that as well.
I worked around the issue for the particle editor, but now it turned out
it can also somehow happen when switching from the Radiant to the Engine
(with F2), so I implemented the "proper" fix of restoring
glConfig.vidWidth/Height (which are overwritten in
RenderSystemLocal::BeginFrame()) in RenderSystemLocal::EndFrame().
That could happen if the searchpath was "" (like the default value of
fs_cdpath ...).
I don't think it could do any damage, but it's confusing (if "developer"
CVar is enabled) and pointless.
It caused lots of "Non-portable: path contains uppercase characters: .."
warnings, which now also become more readable by adding a newline.
If REPRODUCIBLE_BUILD is enabled in CMake, the code will not use
__DATE__ or __TIME__ macros, but instead hardcoded values.
Wherever __DATE__ or __TIME__ were previously used, we now use
ID__DATE__ and ID__TIME__, which are either set to hardcoded values or
to __DATE__ and __TIME__ in neo/framework/Licensee.h.
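So Licensee.h contains something along these lines (the name of the
preprocessor define set by the CMake option is an assumption):

    #ifdef ID_REPRODUCIBLE_BUILD
      #define ID__DATE__  "Jan  1 2022"   // some fixed date/time
      #define ID__TIME__  "00:00:00"
    #else
      #define ID__DATE__  __DATE__
      #define ID__TIME__  __TIME__
    #endif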
For some reason sounds were only updated/started about every 100ms,
which could lead to delays of up to around 100ms after a sound gets
started (with idSoundEmitterLocal::StartSound()) until OpenAL is finally
told to play the sound.
Also, the actual delay drifted over time between 1ms and 100ms, as the
sound ticks weren't a fixed multiple of the (16ms) gameticks - and the
sound updates didn't even happen at the regular 92-94ms intervals they
should because they run in the async thread which only updates every
16ms...
Because of this, the machine gun and other rapid firing weapons sounded
like they shot in bursts or left out shots occasionally, even though
they don't.
Anyway, now sound is updated every 16ms in the async thread so delays
are <= 16ms and hopefully less noticeable.
You can still get the old behavior with com_asyncSound 2 if you want.
OpenAL devices can disconnect, and with some luck they're back after
a few seconds. This especially seems to happen with Intel's Windows GPU
driver and display-audio when switching the resolution or enabling
fullscreen, see #209
Now a disconnect is detected and we try to reset the device for 20
seconds, hoping it comes back. This needs at least openal-soft 1.17.0
to build and 1.20.0 or newer to actually work.
Also added missing stub functions in openal_stub.cpp (used by dedicated
server so it doesn't have to link libopenal)
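A rough sketch of the detection and reset (ALC_EXT_disconnect for detection,
alcResetDeviceSOFT() from openal-soft for the reset; the retry logic and error
handling are left out, and openalDevice is just a placeholder name):

    // alcResetDeviceSOFT() usually needs to be loaded via alcGetProcAddress()
    ALCint connected = 1;
    alcGetIntegerv( openalDevice, alcGetEnumValue( openalDevice, "ALC_CONNECTED" ), 1, &connected );
    if ( !connected && alcResetDeviceSOFT != NULL ) {
        alcResetDeviceSOFT( openalDevice, NULL ); // retried for up to ~20 seconds, see above
    }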
idSoundSystemLocal::Init() didn't consider the case that opening the
default device (alcOpenDevice(NULL)) could fail, but this can happen if no
audio devices exist (or all are disabled or they're sleeping bluetooth
headphones or whatever).
Starting a map failed then with
ERROR: idSoundCache: error generating OpenAL hardware buffer
Now this and "s_noSound 1" are handled the same, and one can still play
(just without sound).
If the main sound in a shader was a .wav
(soundShader->entries[0]->hardwareBuffer == true), only that sound was
played (and looped with AL_LOOPING), even if a leadin was configured.
If the main sound was an .ogg it worked.
Now it should always work.