.. when setting IMGUI=OFF in CMake, of course.
I don't think it's a very good idea to remove that directory, or to
disable ImGui support for any reason besides "I'm on an ancient
platform that doesn't support C++11", but this is a simple enough
change so whatever.
there are lots of values between
GL_COMPRESSED_RGBA_S3TC_DXT5_EXT (0x83F3) and
GL_COMPRESSED_RGBA_BPTC_UNORM (0x8E8C). We don't support them as valid
compressed formats.
(strictly speaking BPTC is not DXT so the function name doesn't fit 100%
anymore, but DXT1-5 is now BC1-3 and BPTC is BC7 so.. close enough.)
Apparently OpenGL (or at least nvidia's driver?) is unhappy if the
smallest mipmap level isn't 1x1 pixels. It doesn't show any error in
the debug messages, the texture is just black.
This can be worked around by setting GL_TEXTURE_MAX_LEVEL.
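The workaround is a single line (numUploadedMipLevels is a made-up name for however many levels were actually uploaded):
```cpp
// tell GL the mipmap chain ends here instead of expecting it to go down to 1x1
glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_MAX_LEVEL, numUploadedMipLevels - 1 );
```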
Can have better image quality than S3TC (DXT1-5).
based on a patch by GitHub user "Manoa1911":
https://github.com/dhewm/dhewm3/issues/447#issuecomment-2254369525
Only supports DXGI_FORMAT_BC7_UNORM - I hope that's enough?
BC7/BPTC is supported by all GPUs that support DX11 (or newer)
or OpenGL 4.2 (or newer). That should be the case for Radeon HD 5000
and newer, Geforce 400 and newer and Intel iGPUs from Ivy Bridge on.
Those GPUs are from 2009/2010, 2012 for Intel.
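When loading a DDS with a DX10 header, the mapping is then just (a sketch, dx10Header is an assumed name):
```cpp
// only BC7 for now, as mentioned above
if ( dx10Header.dxgiFormat == DXGI_FORMAT_BC7_UNORM ) {
	internalFormat = GL_COMPRESSED_RGBA_BPTC_UNORM;
} else {
	common->Warning( "DDS file uses an unsupported DXGI format (%d)", dx10Header.dxgiFormat );
	return false;
}
```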
fixes #447
it has a member with a vtable that would get overwritten then, which is bad
(even though I've never seen a crash caused by this?!)
Instead set the members to NULL/zero manually
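Generic illustration of the problem (names made up):
```cpp
#include <cstring>

struct Inner {
	virtual ~Inner() {}
	int x;
};
struct Outer {
	Inner inner;
	int* ptr;
	void BadClear()  { memset( this, 0, sizeof(*this) ); } // clobbers inner's vtable pointer
	void GoodClear() { inner.x = 0; ptr = NULL; }          // set the members manually instead
};
```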
After loading a texture, Doom3 calculates an MD4-sum of it.. this is
mostly pointless and only used for the "reportImageDuplication" console
command, but whatever.
The problem here was that the image was 32000x2000 pixels (due to some
error when creating it) which dhewm3 wanted to convert to the next
bigger power-of-two values with R_ResampleTexture(), but that function
clamps to 4096x4096, so the actually used pixel data was for 2048x4096.
However, R_ResampleTexture() didn't communicate this to its caller,
and thus a too big size was used for calculating the MD4-sum and it
crashed.
That's fixed now and also a warning is printed about this.
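In other words, something like this (a sketch with assumed signatures, not the literal code):
```cpp
// R_ResampleTexture() may clamp the requested size, so it has to report back
// the dimensions it actually used...
int usedW = scaledWidth, usedH = scaledHeight;
byte* resampled = R_ResampleTexture( pic, width, height, usedW, usedH );
// ...and the checksum must be calculated with those, not the requested size
imageHash = MD4_BlockChecksum( resampled, usedW * usedH * 4 );
```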
BUILD_CPU has been replaced by D3_ARCH, which is also set by CMake on
most platforms, except for Windows, where it's set in neo/sys/platform.h,
because CMake is not able to tell us what CPU architecture it's targeting
(for other platforms we parse the output of gcc/clang's -dumpmachine
option, but for MSVC that's not an option, of course)
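On Windows that ends up being an #ifdef cascade along these lines (sketch; the exact macros/strings in neo/sys/platform.h may differ):
```cpp
#ifdef _MSC_VER
  #if defined(_M_X64)
    #define D3_ARCH "x86_64"
  #elif defined(_M_IX86)
    #define D3_ARCH "x86"
  #elif defined(_M_ARM64)
    #define D3_ARCH "arm64"
  #endif
#endif
```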
in SkinDeep, regs not being initialized caused random crashes
(in dhewm3 I haven't seen that so far, but fixing this won't hurt).
From SkinDeep commit message:
In idRegister::SetToRegs() at `registers[ regs[ i ] ] = v[i];`
regs[i] contained values like 21845 or 22010 or 32272, even though
the static registers array that's written to there only holds 4096
elements (it's `static float regs[MAX_EXPRESSION_REGISTERS];`
from `idWindow::EvalRegs()`).
So it overwrites other data, likely other global variables, like
`gameLocal.entities[4967]`, that now contain garbage and next time
someone tries to use them, bad things happen.
In this case, if someone tries to dereference gameLocal.entities[i]
and the pointer at i contains garbage, there's a segfault (crash).
462404af67
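The fix is basically to initialize the members (a sketch based on the description above; the member names are assumed):
```cpp
// make sure the register indices can't contain stack garbage
idRegister::idRegister() {
	var = NULL;
	enabled = false;
	regCount = 0;
	memset( regs, 0, sizeof( regs ) );
}
```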
somehow the collision code managed to spread NaNs on Win32, which caused
a horrible framerate, "GetPointOutsideObstacles: no valid point found"
warnings in the console and assertions in debug builds.
Didn't happen in Vanilla Doom3 though.
At the location I changed the code in, I saw the following values in the
debugger:
normal: {x=0.00610326231 y=5.58793545e-09 z=1.19209290e-07 }
trmEdge->start: {x=-1358.00000 y=913.948975 z=25.2637405 }
start: {x=-1358.00000 y=916.000000 z=34.0000000 }
end: {x=-1358.00000 y=810.000000 z=34.0000000 }
dist (normal*trmEdge->start): -8.28822231
d1: 9.53674316e-07
d2: 9.53674316e-07
f1 (d1/(d1-d2)): inf
"normal" isn't normalized and also very small (in all directions),
"start" and "end" have quite different y values, but still doing scalar
multiplications of each with "normal" gave the same result..
No idea what this all means exactly, but checking if d1 - d2 is (almost)
0 to prevent INF solved the problems. In the end it will be some tiny
differences in floating point calculations between different platforms
and compilers..
In my test d1-d2 was exactly 0, but I compare with FLT_EPSILON to be
on the safe side.
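The guard itself is tiny; as a self-contained sketch:
```cpp
#include <cfloat> // FLT_EPSILON
#include <cmath>

// avoid dividing by an (almost) zero difference, which would produce INF
// and spread NaNs through the following calculations
static bool IntersectFraction( float d1, float d2, float& f1 ) {
	if ( fabsf( d1 - d2 ) < FLT_EPSILON ) {
		return false;
	}
	f1 = d1 / ( d1 - d2 );
	return true;
}
```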
incl. backwards compat for older savegames.
only partly useful: old savegames only work if you didn't change the
game data; with the CstDoom3 .gui files, loading them crashes. I don't
think that can be avoided, apparently Doom3 has no way to detect that
the GUIs have changed?
In idWindow::Redraw(), I had to make sure the menu scale fix (which,
if enabled for a window, renders that in 4:3 with empty or black bars
on the side if needed for widescreen etc, instead of stretching it)
is disabled if a window uses CST anchors, because the CST anchor code
also adjusts for the display aspect ratio and if we do both, things get
distorted in the other way.
The biggest change is that idDeviceContext::DrawStretchPic(Rotated) now
has code to adjust the coordinates for both CST and the menu scale fix,
so idDeviceContext::AdjustCoords() is mostly obsolete - it's only still
used by idRenderWindow.
Unlike in CstDoom3, that extra adjustCoords argument to those Draw
functions now indicates that any coordinate adjustment should be done,
so when a caller sets it, it sets it to true.
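Conceptually the Draw functions now do something like this (heavily simplified sketch, signature and helper names are not literal):
```cpp
void idDeviceContext::DrawStretchPic( float x, float y, float w, float h,
                                      float s1, float t1, float s2, float t2,
                                      const idMaterial* mat, bool adjustCoords )
{
	if ( adjustCoords ) {
		if ( windowUsesCstAnchors ) {
			// the CST anchor code adjusts for the display aspect ratio itself
			AdjustForAnchor( x, y, w, h );
		} else if ( menuScaleFixActive ) {
			// otherwise apply the 4:3 menu scale fix - never both,
			// that would distort in the other direction
			AdjustForMenuScaleFix( x, y, w, h );
		}
	}
	// ... actually draw ...
}
```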
I removed idDeviceContext::AdjustCursorCoords() because it was only used
in one place anyway.
By writing that info into the demo when recording it (when demos are
played back, mylevel.map isn't read, only mylevel.proc, so the
worldspawn can't be accessed to get allow_nospecular from there)
D3::ImGuiHooks::NewFrame() was still called every frame, but EndFrame()
wasn't because idSessionLocal::UpdateScreen() exited early.
This caused an assertion in Dear ImGui, because it doesn't like calling
NewFrame() if it has been called before without EndFrame() afterwards
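The shape of the fix (a sketch; the early-out condition is made up):
```cpp
void idSessionLocal::UpdateScreen( bool outOfSequence ) {
	if ( skipThisFrame ) { // whatever makes UpdateScreen() exit early
		// keep Dear ImGui's NewFrame()/EndFrame() calls paired
		D3::ImGuiHooks::EndFrame();
		return;
	}
	// ... normal path renders and calls EndFrame() at the end ...
}
```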
based on https://github.com/dhewm/dhewm3/pull/254
The "nospecular" parm will only be used if either
r_supportNoSpecular is set to 1
or r_supportNoSpecular is set to -1 (the default) and the map's spawnargs
contain "allow_nospecular" "1"
This probably doesn't work with (time)demos yet, because I think when
they're being played I can't access the worldspawn entity
If it (or Documents/My Games/dhewm3/) can't be created, show a Windows
MessageBox with an error message and exit.
Would've made #544 easier to figure out
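Roughly (a sketch using the plain Win32 API; the actual message/flow differs):
```cpp
if ( !CreateDirectoryA( savePath, NULL ) && GetLastError() != ERROR_ALREADY_EXISTS ) {
	MessageBoxA( NULL, "Couldn't create the directory for savegames and config!",
	             "dhewm3: Fatal Error", MB_OK | MB_ICONERROR );
	exit( 1 );
}
```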
At least VS2017 doesn't like the big string literal of
proggyvector_font_base85.h (its limit is 64KB, Error C1091), so go back to
using proggyvector_font.h (which contains an int32 array) for MSVC..
Keep the base85 version around for proper compilers, because (unlike
the non-base85 version of the font) it works on Big Endian machines.
It seems like VS2022, or maybe even some point release of VS2019, removed
this limitation (our CI build succeeds), but I couldn't find any details
about that change.
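So the include now looks something like this (sketch):
```cpp
#ifdef _MSC_VER
  // at least VS2017 chokes on string literals > 64KB (Error C1091)
  #include "proggyvector_font.h"        // int32 array version
#else
  #include "proggyvector_font_base85.h" // base85 string, also works on Big Endian
#endif
```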
like in Doom3 BFG: If it's set to 1, no autosaves are created when
entering a level. Defaults to 0 (autosaves enabled)
While at it, I also documented com_numQuicksaves in Configuration.md
based on a fix from @dezo2 from the >60Hz support branch
(TBH I don't know why the crosshair must be scaled to 4:3 but not the
grabber cursor, but this works..)
Modern mice support ridiculously high DPI values, >20'000.
Not sure what that's actually good for, but people who use that ran
into the "idUsercmdGenLocal::MouseMove: Ignoring ridiculous
mouse delta" case, which just threw away the mouse input values, so the
game didn't respond to mouse input anymore or at least felt choppy.
I'm not sure what that code was originally good for or under which
(undesired) circumstances it was supposed to trigger; for now it's
disabled, only the warning is still logged, and only once.
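Simplified, the code now does (a sketch, the threshold value is assumed):
```cpp
if ( abs( mx ) > 1000 || abs( my ) > 1000 ) {
	static bool warnedOnce = false;
	if ( !warnedOnce ) {
		common->Warning( "idUsercmdGenLocal::MouseMove: ridiculous mouse delta" );
		warnedOnce = true;
	}
	// mx = my = 0; // disabled: high-DPI mice can legitimately produce such deltas
}
```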
For these high DPI values to still be usable (camera not moving way
too fast), it probably makes sense if the mouse sensitivity can be set
to values < 1.0. The CVar always supported that, but I adjusted the
Dhewm3SettingsMenu so the sensitivity can also be set to values between
0.01 and 1 there (still going up to 30, like before).
fixes #616
fixes #632
The bug was most probably not caused by D3_SDL_X11 but by
GetDefaultDPI() returning -1.0, which GetDefaultScale() then divided by
96 and rounded to 0.0, which is not a good scaling factor.
I decided to kick the D3_SDL_X11 special case anyway.
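The broken path, as a sketch (function names from above, rounding details assumed):
```cpp
static float GetDefaultScale() {
	float dpi = GetDefaultDPI(); // returned -1.0 when the DPI couldn't be determined
	if ( dpi <= 0.0f ) {
		return 1.0f; // new: fall back to a sane scale instead of
		             // round(-1.0/96.0) == 0.0, which breaks all scaling
	}
	return roundf( dpi / 96.0f * 2.0f ) * 0.5f; // round to steps of 0.5 (assumed)
}
```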
also changed that logic a bit so FormatMessage() is only called when
actually used
and while at it, fixed the build with mingw-w64 on my system
(somehow an SDL header used strcmp() and that didn't work with
`#define strcmp idStr::Cmp` from Str.h)
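The workaround for that clash is along these lines (a sketch; the actual fix may look different):
```cpp
#undef strcmp    // Str.h does `#define strcmp idStr::Cmp`, which breaks SDL
                 // headers that call plain strcmp()
#include <SDL.h>
```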
When requesting < 1 MB, _alloca16() is used, otherwise Mem_Alloc16().
Furthermore, you must pass a bool that will be set to true if the
memory was allocated on the stack, and to false otherwise.
At the end of the function you must call Mem_FreeA( ptr, onStack )
(where onStack is the aforementioned bool), so Mem_Free16() can be
called if it was allocated on the heap.
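Usage then looks like this (a sketch; the allocating macro's name is an assumption based on the Mem_FreeA counterpart, and it has to be a macro so _alloca16() allocates in the caller's stack frame):
```cpp
bool onStack = false;
byte* buffer = (byte*)Mem_MallocA( numBytes, onStack ); // _alloca16() if < 1 MB,
                                                        // else Mem_Alloc16()
// ... use buffer ...
Mem_FreeA( buffer, onStack ); // calls Mem_Free16() only if it came from the heap
```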
idInterpreter::Push() is used only for int and (reinterpreted) float
values, not pointers (as far as I can tell), so 32bit values on all
relevant platforms.
It stored its value as intptr_t at `&localstack[ localstackUsed ]` - on
64bit platforms intptr_t is 64bit.
Unfortunately, all code reading from the stack just got a pointer
to `&localstack[ localstackUsed ]` in the type they want to read
(like `int*` or `float*`) and read that. On Little Endian that happens
to work, on 64bit Big Endian it reads the wrong 4 bytes of the intptr_t,
so it doesn't work.
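Minimal sketch of the mismatch (localstack is a byte array):
```cpp
// old Push(): stores the value as intptr_t - 8 bytes on 64bit platforms
*(intptr_t*)&localstack[ localstackUsed ] = value;
// readers reinterpret the same address as a 4-byte type:
int i = *(int*)&localstack[ localstackUsed ];
// on 64bit Little Endian those are the low-order bytes, so it happens to work;
// on 64bit Big Endian they're the high-order bytes, i.e. the wrong ones
```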
fixes #625, #472
All that code is kinda obfuscated, but the integer passing was plain
wrong (if sizeof(int) != sizeof(intptr_t), esp. noticeable on
Big Endian).
data[i] is used by Callbacks.cpp, and for everything but floats it's
passed directly as an argument (interpreted as either an integer or
a pointer to idVec3 or whatever).
So storing an int in there with `( *( int * )&data[ i ] ) = int(...)`
only sets the first 4 bytes of that intptr_t, which is 8 bytes on 64bit
machines. On Little Endian that just happens to work, on Big Endian
it's the wrong 4 bytes.
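Again as a minimal sketch (data[] being an array of intptr_t):
```cpp
intptr_t data[ 8 ];              // each slot is 8 bytes on 64bit machines
( *( int * )&data[ 0 ] ) = 1234; // writes only the first 4 bytes of the slot
// Callbacks.cpp then passes data[0] as a whole: on Little Endian the first
// 4 bytes are the low-order ones so the value survives, on Big Endian they
// are the high-order bytes and the callee sees garbage
```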