Combs all raw search paths to find game dirs containing PAK/PK2/PK3
files. If multiple uniquely-named directories exist, show a "mods"
option on the "Game" menu and allow selection of the desired mod on a
new eponymous submenu. Includes a fix for a memory leak of mapnames
(read from "maps.lst") when changing games.
When the "game" directory is changed, clear the current list of maps in
the "start network server" menu so that it will be re-initialized the
next time the menu is accessed.
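A minimal sketch of that invalidation, assuming the menu caches the
list in the usual mapnames/nummaps pair (names as in the stock menu
code; details may differ):

```c
/* Free the cached map list (read from "maps.lst") so the next menu
 * access rebuilds it for the new game directory. This also plugs the
 * mapnames leak mentioned above. */
if (mapnames)
{
    int i;

    for (i = 0; i < nummaps; i++)
    {
        free(mapnames[i]);
    }

    free(mapnames);
    mapnames = NULL;
    nummaps = 0;
}
```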
so it waits for about the time of one frame at 60fps, but independently
of the actual framerate.
Without this fix, `wait` is broken unless vsync is on, because
CBuf_Execute() is called about 1000 times per second by Qcommon_Frame(),
even if no render- or packet-frame is executed.
(vsync "fixes" this because then we have a real wait at the end of each
renderframe)
We busy-looped for 5000 microseconds, i.e. 5 milliseconds, which
reduces the framerate to < 200fps.
I guess the value was copy-pasted from Sys_Nanosleep() below, but that
one takes nanoseconds, of course...
Anyway, busy-looping for 5 *micro*seconds instead fixes it.
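For reference, such a busy-wait just spins on a monotonic clock until
the deadline has passed. A minimal sketch (the engine's actual helper
may differ):

```c
#include <stdint.h>
#include <time.h>

/* Spin until 'usec' microseconds have passed. Called once per frame,
 * usec = 5000 caps us at well under 200fps; usec = 5 is invisible. */
static void Sys_Busywait(int64_t usec)
{
    struct timespec start, now;
    int64_t elapsed;

    clock_gettime(CLOCK_MONOTONIC, &start);

    do
    {
        clock_gettime(CLOCK_MONOTONIC, &now);
        elapsed = (now.tv_sec - start.tv_sec) * 1000000LL
                + (now.tv_nsec - start.tv_nsec) / 1000LL;
    } while (elapsed < usec);
}
```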
This is one of these constructs which makes you wonder how it could
ever work. When querying a cvar by calling Cvar_Get(), the default
value (given in `var_value`) is copied into `cvar_t->default_string`.
If a NULL pointer is given in `var_value`, the NULL pointer is passed
to CopyString() and dereferenced. The game crashes. There's already a
NULL pointer check in the 'cvar wasn't found' branch, but none in the
'cvar was found' branch... Moving the check to the beginning of the
function isn't an option, because at least lithium2 doesn't implement
a NULL pointer check either; we would just move the crash from the
server into the game.dll. Therefore copy an empty string into
`cvar_t->default_string` when a NULL pointer was passed in `var_value`
and the cvar was found. Pass the empty string through `CopyString()`
to get a Z_Malloc() allocation for it, otherwise we would call
`Z_Free()` on an unallocated object further down below.
Reported by Chris Stewart.
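A minimal sketch of the fixed 'cvar was found' branch (surrounding
logic abbreviated):

```c
cvar_t *Cvar_Get(char *var_name, char *var_value, int flags)
{
    cvar_t *var = Cvar_FindVar(var_name);

    if (var)
    {
        var->flags |= flags;

        /* The fix: never hand a NULL default to CopyString(). Store
         * an empty - but heap-allocated - string instead, so that the
         * later Z_Free() on default_string stays valid. */
        if (!var_value)
        {
            var->default_string = CopyString("");
        }
        else
        {
            var->default_string = CopyString(var_value);
        }

        return var;
    }

    /* 'cvar wasn't found' branch: this NULL check already existed. */
    if (!var_value)
    {
        return NULL;
    }

    /* ... create and link the new cvar as before (sketch ends) ... */
    return NULL;
}
```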
This allows really big configuration files up to 32k. It would be
better to switch the global buffer to something allocated at runtime,
but that's non-trivial... This change should be safe, since the
buffers are global (allocated in the BSS) and neither included in
savegames nor sent over the network.
Closes #582.
It's often reported that the q2ded dedicated server consumes huge
amounts of CPU time. That's because users don't know that `busywait`
must be set to `0`. Since there's no point in using busywaits in the
dedicated server (the network jitter is always bigger than the jitter
caused by nanosleep() and equivalents), just force q2ded to use
nanosleep().
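A minimal sketch of the resulting wait logic (reduced stand-ins for
the cvar type and sleep helper; DEDICATED_ONLY stands for the
dedicated-server build):

```c
#include <time.h>

typedef struct { float value; } cvar_t;   /* reduced stand-in */
static cvar_t busywait_ = { 1 };
static cvar_t *busywait = &busywait_;

static void Sys_Nanosleep(long nsec)
{
    struct timespec t = { nsec / 1000000000L, nsec % 1000000000L };
    nanosleep(&t, NULL);
}

static void Qcommon_Wait(long usec)
{
#ifndef DEDICATED_ONLY
    if (busywait->value)
    {
        /* client builds may spin here for precision (not shown) */
        return;
    }
#endif

    /* q2ded always takes this path: scheduler jitter is far below
     * typical network jitter, so busy-waiting buys nothing. */
    Sys_Nanosleep(usec * 1000L);
}
```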
I don't remember why we restricted it to client startup. The original
code executed it every time `game` changed... Revert to that behavior.
Look here if some problems come up. ;)
Closes #544.
This is an enhancement to the previous `yield` work:
* Don't enforce `-march=armv8-a` for aarch64 builds, because it is the
initial ARMv8 revision and compilers will either use that or something
newer.
* Refine the preprocessor guards around `asm("yield");` so the code
isn't compiled in if unsupported by the current `-march=`; see the
sketch below.
Submitted by @smcv in #535.
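The refined guards might look roughly like this (a sketch; the
project's exact conditions may differ):

```c
static inline void Sys_CpuPause(void)
{
#if defined(__aarch64__)
    /* yield is part of baseline ARMv8-A, so always safe on aarch64 */
    asm("yield");
#elif defined(__ARM_ARCH) && (__ARM_ARCH >= 7)
    /* 32 bit ARM: only emit yield if -march guarantees support */
    asm("yield");
#elif defined(__i386__) || defined(__x86_64__)
    asm("pause");
#endif
}
```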
YQ2 has a much more precise Sys_Milliseconds() than Vanilla Quake II
and it always starts at 0, not at some semirandom value. If the client
is started by `./quake2 +connect example.com` or the user just walks
their way to the menu, there's a very high probability that two or
more clients end up with the same qport... We can't use rand(),
because we're always starting with the same seed, so all clients
generate more or less the same random numbers and we end up in the
same situation.
So just call time(). It's portable and more or less in line with what
the original code did for Windows. It may be necessary to implement
some kind of fallback logic just in case two clients still end up with
the same qport, but that's a task for another day.
Closes #537.
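A sketch of the idea (the mask just keeps the value in the 16 bit
range the protocol uses; the helper name is invented):

```c
#include <time.h>

/* Derive the initial qport from wall clock time instead of
 * Sys_Milliseconds(), which always starts near 0 in YQ2. Two clients
 * launched in the same second still collide - hence the possible
 * fallback logic mentioned above. */
static int Sys_InitialQport(void)
{
    return (int)(time(NULL) & 0xffff);
}
```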
C11 _Noreturn is only accepted on function declarations, not on function
pointers, so we can't use it on callbacks like game_import_t.error and
refimport_t.Sys_Error. Use a separate macro for those.
The problematic situation doesn't currently happen because the Makefile
hard-codes -std=gnu99, which disables C11 features; but removing
-std=gnu99 (resulting in the compiler's default, currently gnu11) causes
compilation failures with at least gcc 9.x.
Signed-off-by: Simon McVittie <smcv@debian.org>
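A sketch of the split, with hypothetical macro names (the project's
actual spelling may differ):

```c
/* Normal declarations can use C11 _Noreturn or the GNU attribute... */
#if defined(__GNUC__) || defined(__clang__)
  #define Q_NORETURN __attribute__((noreturn))
#elif __STDC_VERSION__ >= 201112L
  #define Q_NORETURN _Noreturn
#else
  #define Q_NORETURN
#endif

/* ...function pointers need the attribute form, placed inside the
 * declarator so it applies to the pointed-to function type. */
#if defined(__GNUC__) || defined(__clang__)
  #define Q_NORETURN_FUNCPTR __attribute__((noreturn))
#else
  #define Q_NORETURN_FUNCPTR
#endif

Q_NORETURN void Com_Error(int code, const char *fmt, ...);

typedef struct {
    /* e.g. game_import_t.error / refimport_t.Sys_Error */
    void (Q_NORETURN_FUNCPTR *error)(const char *fmt, ...);
} game_import_t;
```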
Until now CFGDIR was hardcoded to YamagiQ2 on Windows and .yq2 on
everything else. Sometimes it's desirable to have a separate dir for
some tasks, for example when testing things that introduce new cvars.
Add -cfgdir to override CFGDIR.
This allows for longer arguments passed to cvars; gl_nolerp_list is a
good example of a case where a token length of 128 is too short. Keep
the mapname[] buffer in the server struct at 128 bytes to preserve
savegame compatibility.
Closes #526.
It's been over two years since we merged it into master. @0lvin has
done a wonderful job in maintaining it, he fixed a lot of bugs, did a
fair amount of enhancements, etc. There weren't any bug reports for
the last 6 months; it looks like it's more or less stable right now.
So don't scare the users by calling it experimental.
cvar operations are special commands that allow the programmatic
manipulation of cvar values. 'reset' resets a given cvar to its
default value, e.g. `reset r_mode` would reset `r_mode` to `4`.
'resetall' resets all known cvars with the exception of `game`.
The code is based upon q2pro.
This is part of issue #414.
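A minimal sketch of the 'reset' handler on top of the usual command
and cvar helpers (Cvar_FindVar() is internal to cvar.c in stock code;
the actual q2pro-based implementation may differ):

```c
static void Cvar_Reset_f(void)
{
    cvar_t *var;

    if (Cmd_Argc() != 2)
    {
        Com_Printf("Usage: reset <variable>\n");
        return;
    }

    var = Cvar_FindVar(Cmd_Argv(1));

    if (!var)
    {
        Com_Printf("Variable '%s' is unknown\n", Cmd_Argv(1));
        return;
    }

    /* back to the default recorded at Cvar_Get() time */
    Cvar_Set(var->name, var->default_string);
}
```

'resetall' would loop over the whole cvar list the same way, skipping
`game`.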
At least with MinGW on Windows vsnprintf() treats buffer < size as an
error, returning -1 instead of the number of characters that would
have been printed without size restrictions. Therefore msgLen may be
wrong, leading to all kinds of funny mistakes further down below...
Buffer overflow included. Work around this by handling the msgLen < 0
case and adding an explicit terminating \0.
This is another case of "I wonder why nobody has ever noticed this":
the GL1 renderer's extension string triggered the buffer overflow each
time the game started.
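A sketch of the workaround (buffer size illustrative):

```c
#include <stdarg.h>
#include <stdio.h>

static void Com_VPrintf_Sketch(const char *fmt, va_list argptr)
{
    char msg[4096];
    int msgLen = vsnprintf(msg, sizeof(msg), fmt, argptr);

    /* MinGW may return -1 on truncation instead of the would-be
     * length, so clamp both cases and terminate explicitly. */
    if (msgLen >= (int)sizeof(msg) || msgLen < 0)
    {
        msgLen = sizeof(msg) - 1;
    }

    msg[msgLen] = '\0';

    fputs(msg, stdout);  /* engine: hand off to console / log */
}
```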
If vsync is enabled, misuse it to slow the client down: calculate the
target framerate, add a security margin of 20% and let the vsync
handle the rest. This hopefully solves some problems with frametime
spikes. This is an idea by @DanielGibson.
If vsync is disabled, use a simple 1s / fps calculation.
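In numbers, a sketch of the two cases (microseconds per frame; the
helper name is invented):

```c
/* How long one frame may take. With vsync on, aim 20% above the
 * target framerate and let the blocking buffer swap do the exact
 * pacing; without vsync, plain 1s / fps. */
static long TargetFrametime(float fps, int vsync)
{
    if (vsync)
    {
        return (long)(1000000.0f / (fps * 1.2f));
    }

    return (long)(1000000.0f / fps);
}
```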
Introduced FS_GetFilenameForHandle(fileHandle_t) for this.
This helps if a map has been started with "wrong" case, which doesn't
immediately fail if it has been loaded from a pack, but will result in
invalid savegame names that (with case-sensitive FSs) will fail to
load (when going back to a formerly played level).
Until now the UDP download code prohibited downloading of maps from
all pak files. That was some kind of copy protection; without the
limitation demo users could download assets from the full version.
Don't apply that protection to all paks, but only to numbered .pak
files.
This could be enhanced by limiting the protection to pak0 to pak2 for
baseq2 and pak0 for both xatrix and rogue.
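A sketch of such a check (expects a plain basename; the helper name
is invented):

```c
#include <stdio.h>

/* Only numbered pak files (pak0.pak, pak1.pak, ...) stay protected;
 * the trailing %c must match nothing for an exact-name match. */
static int FS_IsNumberedPak(const char *filename)
{
    int num;
    char rest;

    return sscanf(filename, "pak%d.pak%c", &num, &rest) == 1;
}
```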
The signal handler was always fishy since it modified the global
process state, but it worked on all common platforms. libcurl turns
the process into a multithreaded environment, thus breaking that
fragile construction. After the signal handler was called the global
state is inconsistent and there's a high chance of things going wrong,
for example at the next curl download or when setjmp() is called in
Qcommon_Frame().
These are:
- CL_ResetPrecacheCheck(): Resets the precacher, forces it to reevaluate
which assets are available and what needs to be downloaded.
- FS_FileInGamedir(): Checks if a file (and only a real file, not
something in a pak) is available in fs_gamedir.
- FS_AddPAKFromGamedir(): Adds a pak in fs_gamedir to the search path.
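A sketch of how the three fit together in the download code (flow and
file name illustrative):

```c
/* after a pak file finished downloading into fs_gamedir: */
if (FS_FileInGamedir("newstuff.pak"))
{
    /* make its contents visible to the file system ... */
    FS_AddPAKFromGamedir("newstuff.pak");

    /* ... and let the precacher re-check what's available locally */
    CL_ResetPrecacheCheck();
}
```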
Lost time is time that we spent but didn't account for. So the lost
time doesn't shorten a second (in fact, that would mean we lose the
time twice), it lengthens it. This has a small but noticeable impact
on timing when running with vsync enabled.
With this change the code matches the comment. While most packet
frames are renderer frames, we must not take the render frame time
into account when calculating the average time spent processing the
packet frames. I don't think that this change makes any measurable
difference, since the packet frame time is just a very small fraction
of the renderer frame time.
We need to reset the avgrenderframetime and avgpacketframetime
variables when we recalculate the average times spent rendering frames
and / or processing packet frames. Otherwise the result will be much
too high, leading to lost frames down below. I wonder why nobody
complained about this until now.
While at it, lower the security margin from 2% to just 1%. On the one
hand we need a small security margin, because Quake II is not that
precise and our average times spent may be too low. On the other hand
the margin must not be too large, because the bigger the margin is,
the bigger the risk of missing frames... Both 2% and 1% are very
small; in fact they often don't even make a difference because of
float -> int conversions. For example 16 * 0.02 = 0.32 -> cut to 0.
But there are some extreme cases where it matters, and my empirical
testing showed that 1% is slightly better than 2%. At least on my
system. And it's better to f*ck up the timing for a small number of
frames than to miss a frame. A broken timing is hard to recognize, a
missed frame is much more annoying.
1) Do not increment the frame rate returned by SDL by 1. Incrementing
is unnecessary; more or less up to date versions of Nvidia's, AMD's
and Intel's GPU drivers on relevant platforms return a value that's
either correct or rounded up to the next integer. And SDL itself also
rounds up to the next integer, at least in current versions. In fact,
incrementing the value by one is harmful, it messes up our internal
timing and leads to subtle mispredictions. Working around that in
frame.c would add another bunch of fragile magic... So just do it
correctly. If someone still has broken GPU drivers or SDL versions
that round down, they can set vid_displayrefreshrate.
2) The calculation of the 5% security margin to pfps in frame.c was
wrong. It didn't take into account that rfps can be slightly wrong in
the first place, e.g. 60 on a 59.95Hz display. Correct it by comparing
against rfps including the margin and not the plain value; see the
sketch below.
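A sketch of the corrected clamp from 2) (illustrative only; the
engine's exact logic differs):

```c
/* rfps itself may be reported slightly high (60 on a 59.95Hz
 * display), so clamp pfps against the margin-adjusted value, not
 * against plain rfps. */
static float ClampPacketFps(float pfps, float rfps)
{
    float margin = rfps * 0.05f;

    if (pfps > rfps - margin)
    {
        pfps = rfps - margin;
    }

    return pfps;
}
```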
Until now the server just called remove() to delete the server's state
from the HDD. That was fine on Linux where UTF-8 is used, but failed
silently on Windows in case the working dir path had some Unicode
characters. Replace remove() with Sys_Remove(): on Linux it's just a
wrapper around remove(), on Windows it does a UTF-8 -> UTF-16
conversion and calls _wremove(). This fixes issue #318.
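A sketch along those lines (error handling omitted):

```c
#include <stdio.h>

#ifdef _WIN32
#include <windows.h>

/* Windows: convert the UTF-8 path to UTF-16 and use _wremove(). */
int Sys_Remove(const char *path)
{
    WCHAR wpath[MAX_PATH];

    MultiByteToWideChar(CP_UTF8, 0, path, -1, wpath, MAX_PATH);

    return _wremove(wpath);
}
#else
/* everywhere else the narrow remove() already handles UTF-8 */
int Sys_Remove(const char *path)
{
    return remove(path);
}
#endif
```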
I've chosen the minimal invasive way for this:
* Import miniz and remove the -lz linker flags.
* Create a short header minizconf.h providing everything we need that
was originally defined by zconf.h and not provided by miniz.
* Replace zlib.h with miniz.h and minizconf.h.
I've used _wfopen_s() because it's newer and the older variants are
said to throw warnings when built with MSVC. But apparently Windows XP
hasn't got that symbol... So just use the normal _wfopen(), MSVC is
unsupported anyways. This may or may not be enough to restore Win XP
compatibility.