This can happen in some special cases, like basedir == binarydir. A
common case is Windows. If -basedir isn't given, basedir is set to '.'
and we end up with basedir == binarydir.
In theory adding a dir twice shouldn't be a problem, because the first
addition always matches and the second addition is ignored. But I'm
not sure if that's always the case in practice.
Working with canonical full paths everywhere makes debugging easier.
And it will be used in a later commit to make sure that each path is
added only once.
* Convert backslashes into forward slashes.
* Make sure that there's no slash at the end.
In theory this is a noop, just making the output somewhat more readable.
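A minimal sketch of that normalization (the helper name is made up, the
real code may differ):

    #include <string.h>

    /* Hypothetical helper: normalize a path in place by converting
       backslashes to forward slashes and stripping a trailing slash. */
    static void NormalizePath(char *path)
    {
        size_t len = strlen(path);
        size_t i;

        for (i = 0; i < len; i++)
        {
            if (path[i] == '\\')
            {
                path[i] = '/';
            }
        }

        if (len > 1 && path[len - 1] == '/')
        {
            path[len - 1] = '\0';
        }
    }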
This would have prevented the 7.44 release f*ckup. In practice this
should never happen, because there's always baseq2/, but you never know
and it's better to be sure.
This prevents Sys_FindFirst() further down below from getting called
with wrong arguments, returning a null pointer. The null pointer crashes
the filesystem. :/
Combs all Raw search paths to find game dirs containing PAK/PK2/PK3
files. If multiple uniquely-named directories exist, then show a "mods"
option on the "Game" menu and allow selection of desired mod on new
eponymous submenu. Includes fix for memory leak of mapnames (read from
"maps.lst") when changing games.
When the "game" directory is changed, clear the current list of maps in
the "start network server" menu so that it will be re-initialized the
next time the menu is accessed.
so it waits for about the time of one frame at 60fps, but independently
of the actual framerate.
Without this fix, wait is broken unless vsync is on, because
CBuf_Execute() is called about 1000 times per second by Qcommon_Frame(),
even if no render- or packet-frame is executed.
(vsync "fixes" this because then we have a real wait at the end of each
renderframe)
We busy-looped for 5000 microseconds, i.e. 5 milliseconds, which reduces
the framerate to < 200fps.
I guess the value was copypasted from Sys_Nanosleep() below, but
that was nanoseconds, of course...
Anyway, busy-looping for 5 *micro*seconds instead fixes it.
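A sketch of the intended behavior on top of a monotonic microsecond
counter (assuming something like the Sys_Microseconds() mentioned
elsewhere in this log):

    /* Busy-wait for roughly 5 microseconds; Sys_Microseconds() is
       assumed to return a monotonic timestamp in microseconds. */
    long long start = Sys_Microseconds();

    while (Sys_Microseconds() - start < 5)
    {
        /* spin */
    }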
This is one of these constructs which makes you wonder how it could ever
work. When querying a cvar by calling Cvar_Get(), the default value
(given in `var_value`) is copied into `cvar_t->default_string`. If a
NULL pointer is given in `var_value`, the NULL pointer is passed to
CopyString() and dereferenced. The game crashes. There's already a NULL
pointer check in the 'cvar wasn't found' branch, but none in the 'cvar
was found' branch... Moving the check to the beginning of the function
isn't an option, because at least lithium2 doesn't implement a NULL
pointer check either. We would just move the crash from the server into
the game.dll. Therefore copy an empty string into
`cvar_t->default_string` when a NULL pointer was passed in `var_value`
and the cvar was found. Pass the empty string through `CopyString()` to
get a Z_Malloc() allocation for it, otherwise we would call `Z_Free()`
on an unallocated object further down below.
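A simplified sketch of the fix in the 'cvar was found' branch
(surrounding Cvar_Get() code omitted):

    /* Copy an empty string instead of dereferencing a NULL default
       value. The empty string still goes through CopyString() so that
       Z_Free() later operates on a real Z_Malloc() allocation. */
    if (!var_value)
    {
        var->default_string = CopyString("");
    }
    else
    {
        var->default_string = CopyString(var_value);
    }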
Reported by Chris Stewart.
This allows really big configuration files up to 32k. It would be better
to switch the global buffer to something allocated at runtime, but that's
non-trivial... This change should be safe, since the buffers are global
(allocated in the BSS) and not included in savegames nor sent over the
network.
Closes #582.
It's often reported that the q2ded dedicated server consumes huge
amounts of CPU time. That's because users don't know that `busywait`
must be set to `0`. Since there's no point in using busywaits in the
dedicated server (the network jitter is always bigger than the
jitter caused by nanosleep() and equivalents), just force q2ded to
use nanosleep().
I don't remember why we restricted it to client startup. The original
code executed it every time `game` changed... Revert to that
behavior. Look here if some problems come up. ;)
Closes #544.
This is an enhancement to the previous `yield` work:
* Don't enforce `-march=armv8-a` for aarch64 builds, because it is the
initial ARMv8 revision and compilers will either use that or something
newer.
* Refine preprocessor guards around `asm("yield");` so the code isn't
compiled in if unsupported by the current `-march=`.
Submitted by @smcv in #535.
YQ2 has a much more precise Sys_Milliseconds() than Vanilla Quake II and
it always starts at 0, not some other semirandom value. If the client is
started by `./quake2 +connect example.com` or all users just walk their
way to the menu, there's a very high probability that two or more
clients end up with the same qport... We can't use rand(), because we're
always starting with the same seed, so all clients generate more or less
the same random numbers and we end up in the same situation.
So just call time(). It's portable and more or less in line with what
the original code did for Windows. It may be necessary to implement some
kind of fallback logic just in case two clients still end up with
the same qport, but that's a task for another day.
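A sketch of the idea (the exact cvar setup in the real code may differ):

    #include <stdio.h>
    #include <time.h>

    /* Derive a semi-unique qport from the wall clock; time() is
       portable and roughly matches what the original Windows code did. */
    char buf[16];

    snprintf(buf, sizeof(buf), "%d", (int)(time(NULL) & 0xffff));
    Cvar_Get("qport", buf, CVAR_NOSET);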
Closes #537.
C11 _Noreturn is only accepted on function declarations, not on function
pointers, so we can't use it on callbacks like game_import_t.error and
refimport_t.Sys_Error. Use a separate macro for those.
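A sketch of what the two macros might look like (names and details are
illustrative):

    /* Illustrative macros: the first one can use C11 _Noreturn on plain
       function declarations, the second is for function pointers (like
       game_import_t.error), where only the GNU attribute form works. */
    #if defined(__STDC_VERSION__) && __STDC_VERSION__ >= 201112L
      #define YQ2_ATTR_NORETURN _Noreturn
    #elif defined(__GNUC__)
      #define YQ2_ATTR_NORETURN __attribute__((noreturn))
    #else
      #define YQ2_ATTR_NORETURN
    #endif

    #if defined(__GNUC__)
      #define YQ2_ATTR_NORETURN_FUNCPTR __attribute__((noreturn))
    #else
      #define YQ2_ATTR_NORETURN_FUNCPTR
    #endif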
The problematic situation doesn't currently happen because the Makefile
hard-codes -std=gnu99, which disables C11 features; but removing
-std=gnu99 (resulting in the compiler's default, currently gnu11) causes
compilation failures with at least gcc 9.x.
Signed-off-by: Simon McVittie <smcv@debian.org>
Until now CFGDIR was hardcoded to YamagiQ2 on Windows and .yq2 on
everything else. Sometimes it's desirable to have a separate dir
for some tasks, for example when testing things that introduce new
cvars. Add -cfgdir to override CFGDIR.
This allows for longer arguments passed to cvars, gl_nolerp_list is a
good example for a case where a token length of 128 is too short. Keep
the mapname[] buffer in the server struct at 128 bytes to preserve
savegame compatibility.
Closes #526.
It's been over two years since we merged it into the master. @0lvin has
done a wonderful job in maintaining it, he fixed a lot of bugs, did a
fair amount of enhancements, etc. There weren't any bug reports for the
last 6 months, it looks like it's more or less stable right now. So
don't scare the users by calling it experimental.
cvar operations are special commands that allow the programmatic
manipulation of cvar values. 'reset' resets a given cvar to its
default value, e.g. `reset r_mode` would reset `r_mode` to `4`.
'resetall' resets all known cvars with the exception of `game`.
The code is based upon q2pro.
This is part of issue #414.
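A simplified sketch of the 'reset' operation (names and error handling
may differ from the real code):

    /* Look the cvar up and set it back to its default string. */
    static void Cvar_Reset_f(void)
    {
        cvar_t *var = Cvar_FindVar(Cmd_Argv(1));

        if (var)
        {
            Cvar_Set(var->name, var->default_string);
        }
    }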
At least with MinGW on Windows vsnprintf() treats buffer < size as an
error, returning -1 instead of the number of characters that would have
been printed without size restrictions. Therefore msgLen may be wrong,
leading to all kinds of funny mistakes further down below... Buffer
overflow included. Work around this by handling the msgLen < 0 case and
adding an explicit terminating \0.
This is another case of "I wonder why nobody has ever noticed this",
the GL1 renderer's extension string triggered the buffer overflow each
time the game started.
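A sketch of the workaround (buffer and variable names are illustrative):

    /* MinGW's vsnprintf() may return -1 when the buffer is too small,
       so clamp msgLen and terminate explicitly. */
    int msgLen = vsnprintf(msg, sizeof(msg), fmt, argptr);

    if (msgLen < 0 || (size_t)msgLen >= sizeof(msg))
    {
        msgLen = (int)sizeof(msg) - 1;
    }

    msg[msgLen] = '\0';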
If the vsync is enabled, misuse it to slow the client down, e.g.
calculate the target framerate, add a security margin of 20% and
let the vsync handle the rest. This hopefully solves some problems
with frametime spikes. This is an idea by @DanielGibson.
If the vsync is disabled, use a simple 1s / fps calculation.
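A sketch of the two cases (variable names are illustrative):

    /* With vsync on, aim 20% above the wanted framerate and let the
       vsync do the actual limiting; with vsync off, use a plain
       1s / fps delay in microseconds. */
    float targetframerate = vsyncActive ? wantedfps * 1.2f : wantedfps;
    long long delay = (long long)(1000000.0f / targetframerate);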
introduced FS_GetFilenameForHandle(fileHandle_t) for this
this helps if a map has been started with "wrong" case, which doesn't
immediately fail if it has been loaded from a pack, but will result
in invalid savegame names that (with case-sensitive FSs) will fail to
load (when going back to a formerly played level)
Until now the UDP download code prohibited downloading of maps from all
pak files. That was some kind of copy protection; without the limitation
demo users could download assets from the full version. Don't apply that
protection to all paks, but only to numbered .pak files.
This could be enhanced by limiting the protection to pak0 to pak2 for
baseq2 and pak0 for both xatrix and rogue.
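A sketch of how the 'numbered pak' check could look (the real code may
differ):

    #include <stdio.h>
    #include <string.h>

    /* Returns 1 for names like pak0.pak .. pak99.pak, 0 otherwise. */
    static int IsNumberedPak(const char *name)
    {
        int num;
        int len = 0;

        if ((sscanf(name, "pak%d.pak%n", &num, &len) == 1) &&
            (len == (int)strlen(name)))
        {
            return 1;
        }

        return 0;
    }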
The signal handler was always fishy since it modified the global process
state, but it worked on all common platforms. libcurl turns the process
into a multithreaded environment, thus breaking that fragile
construction. After the signal handler has been called the global state
is inconsistent and there's a high chance of things going wrong. For
example at the next curl download or when setjmp() is called in
Qcommon_Frame().
These are:
- CL_ResetPrecacheCheck(): Resets the precacher, forces it to reevaluate
which assets are available and what needs to be downloaded.
- FS_FileInGamedir(): Checks if a file (and only a real file, not
something in a pak) is available in fs_gamedir.
- FS_AddPAKFromGamedir(): Adds a pak in fs_gamedir to the search path.
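Illustrative prototypes (the actual signatures in the code may differ
slightly):

    void CL_ResetPrecacheCheck(void);
    qboolean FS_FileInGamedir(const char *file);
    qboolean FS_AddPAKFromGamedir(const char *pak);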
Lost time is time that we spent but didn't account for. So the lost time
doesn't shorten a second (in fact that would mean that we'd lose the
time twice), it lengthens a second. This has a small but noticeable
impact on timing when running with vsync enabled.
With this change the code matches the comment. While most packet frames
are renderer frames, we must not take the render frame time into account
when calculating the average time spent processing the packet frames. I
don't think that this change makes any measurable difference since the
packet frame time is just a very small fraction of the renderer frame
time.
We need to reset the avgrenderframetime and avgpacketframetime variables
when we recalculate the average times spent rendering frames and / or
processing packet frames. Otherwise the result will be much too high,
leading to lost frames down below. I wonder why nobody complained about
this until now.
While at it lower the security margin from 2% to just 1%. On the one
hand we need a small security margin, because Quake II is not that
precise and our average times spent may be too low. On the other hand
the margin must not be too large, because the bigger the margin is the
bigger is the risk of missing frames... Both 2% and 1% are very small,
in fact often they don't even make a difference because of float ->
int conversions. For example 16 * 0.02 = 0.32 -> cut to 0. But there
are some extreme cases where it matters and my empirical testing showed
that 1% is slightly better than 2%. At least on my system. And it's
better to f*ck up the timing for a small number of frames than to miss
a frame. A broken timing is hard to recognize, a missed frame is much
more annoying.
1) Do not increment the frame rate returned by SDL by 1. Incrementing
is unnecessary, more or less up to date versions of Nvidia's, AMD's
and Intel's GPU drivers on relevant platforms return a value that's
either correct or rounded up to the next integer. And SDL itself also
rounds up to the next integer. At least in current versions. In fact,
incrementing the value by one is harmful, it messes up our internal
timing and leads to subtle mispredictions. Working around that
in frame.c would add another bunch of fragile magic... So just do
it correctly. If someone still has broken GPU drivers or SDL versions
that are rounding down, they could set vid_displayrefreshrate.
2) The calculation of the 5% security margin to pfps in frame.c was
wrong. It didn't take into account that rfps can be slightly wrong
in the first place, e.g. 60 on a 59.95Hz display. Correct it by
comparing against rfps including the margin and not the plain value.
Until now the server just called remove() to delete the server's state
from the HDD. That was fine on Linux where UTF-8 is used, but failed
silently on Windows in case the working dir path had some Unicode
characters. Replace remove() by Sys_Remove(); on Linux it's just a
wrapper around remove(), on Windows it does a UTF-8 -> UTF-16 conversion
and calls _wremove(). This fixes issue #318.
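A sketch of what such a wrapper might look like (buffer sizes and error
handling simplified):

    #ifdef _WIN32
    #include <windows.h>
    #include <stdio.h>

    int Sys_Remove(const char *path)
    {
        WCHAR wpath[MAX_PATH];

        /* Convert the UTF-8 path to UTF-16 and use the wide API. */
        MultiByteToWideChar(CP_UTF8, 0, path, -1, wpath, MAX_PATH);

        return _wremove(wpath);
    }
    #else
    #include <stdio.h>

    int Sys_Remove(const char *path)
    {
        return remove(path);
    }
    #endif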
I've chosen the minimal invasive way for this:
* Import miniz and remove -lz linker flags.
* Create a short header minizconf.h providing everything we need
originally defined by zconf.h and not provided by miniz.
* Replace zlib.h with miniz.h and minizconf.h.
I've used _wfopen_s() because it's newer and the older variants are said
to throw warnings when built with MSVC. But apparently Windows XP hasn't
got that symbol... So just use the normal _wfopen(), MSVC is unsupported
anyways. This may or may not be enough to restore Win XP compatibility.
The 'game' command was more or less functional after the last commit.
We just need to reset the initialGame (renamed to userGivenGame) so we
don't revert back to the old game at server disconnect.
When connecting to a multiplayer game that runs a different mod
("game" cvar) than you are, it didn't load the corresponding configs
from the mod, but saved your changes to the config to the mod's config,
which is doubly useless.
Now when the "game" cvar is changed, the configs are reloaded (from
the right directories for the mod), and when disconnecting the configs
are written, so the changes you did for a mod while playing MP are saved
before game is reset to the game you started with.
Until now we did an easy calculation to determine the frame timing:
1000000 microseconds (== 1 second) / targetframerate == delay between
frames. This works if the CPU and GPU are fast enough since the time
needed to process the frame is negligible. But if one of them is too
slow or the GPU driver takes too long (see issue #277 for an example)
we render too few frames.
Work around this by calculating the average time used to process the
last 60 render or packet frames and take that into account when
determining the delay between the frames. With this change even my
rotten AMD Radeon and its broken Windows GL driver is able to hold
the display's framerate (enforced by vsync) just fine.
While here add a 5% security margin to our target packet frame rate if
the vsync is enabled. Just to be sure that we never process more render
than packet frames.
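An illustrative sketch of the averaging (the real code in frame.c
differs in the details):

    #define AVG_FRAMES 60

    static long long renderframetimes[AVG_FRAMES];
    static int renderframenum;
    static long long avgrenderframetime;

    /* Record the time (in microseconds) spent on one render frame and
       recompute the average over the last 60 frames. The delay between
       frames is then the target delay minus this average. */
    void AccountRenderFrame(long long frametime)
    {
        long long sum = 0;
        int i;

        renderframetimes[renderframenum % AVG_FRAMES] = frametime;
        renderframenum++;

        for (i = 0; i < AVG_FRAMES; i++)
        {
            sum += renderframetimes[i];
        }

        avgrenderframetime = sum / AVG_FRAMES;
    }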
There were two ways to implement this. One was to go with the stuff
already included in minizip and one to implement our own wrapper around
fopen(). This is the second option since @DanielGibson convinced me
that it would be safer.
We can't rely on the game.dll being Unicode conformant. Work around
that by changing the current working directory before calling into
the game.dll, pass a non-Unicode string to it and change back after
we return.
To be able to pass UTF-8 encoded paths through cvars both the cvar
subsystem and the command parser would need a fair amount of UTF-8
understanding. And I'm not the poor soul that's going to implement
that. Therefore pass the datadir through a global variable.
This is done in shared.c so that it's available for both the client /
server / renderer and the game. A workaround for older game DLLs will
be added at a later time.
On Unix platforms unicode is implemented through UTF-8 which is
transparent for applications. But on Windows a UTF-16 dialect is
used which needs alterations at the application side. This wrapper is
another step toward Unicode support on Windows, now we can replace
fopen() by a function that converts our internal UTF-8 paths to
the Windows UTF-16 dialect.
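A sketch of the Windows side of such a wrapper (the function name is
illustrative, buffer sizes and error handling are simplified):

    #ifdef _WIN32
    #include <windows.h>
    #include <stdio.h>

    FILE *Sys_FOpen(const char *path, const char *mode)
    {
        WCHAR wpath[MAX_PATH];
        WCHAR wmode[16];

        /* Convert our internal UTF-8 strings to UTF-16 for the wide
           Windows API. */
        MultiByteToWideChar(CP_UTF8, 0, path, -1, wpath, MAX_PATH);
        MultiByteToWideChar(CP_UTF8, 0, mode, -1, wmode, 16);

        return _wfopen(wpath, wmode);
    }
    #endif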
This is a noop for Unix platforms. The Windows build is broken,
the compiler errors out in shared.h. This will be fixed in a
later commit.
Caveats:
* fopen() calls in 3rd party code (std_* and unzip) are not replaced.
This may become a problem. We need to check that.
* In the Unix specific code fopen() isn't replaced since it's not
necessary.
There's no need to exclude directories from search by flags. In fact
the Unix backend has worked nicely for years without it... Sadly we
can't remove the now superfluous 'canhave' and 'musthave' attributes
from Sys_FindFirst() and Sys_FindNext() since they're defined in
shared.h and may be used from custom game DLLs.
* Remove a bunch of unnecessary functions.
* Reorder functions into logical groups. The ordering is now the same
on Unix and Windows.
While at it add several TODOs to the code. There's no need for special
library loading functions for the game, the Windows backend still uses
a lot of old and fishy DOS functions, etc. All this will be done at a
later time.
There's no need to duplicate machine independent parts of the client
initialization and the main loop for every platform.
While at it remove the nearly empty unix.h header and move Windows
main() into its own file. Now both platforms have the same basic layout.
With this renamed cvars can be rewritten when config.cfg is first
loaded. Please note that once this was done older YQ2 versions can't
parse that config.cfg anymore.
gl_maxfps > 1000 breaks things, and cl_maxfps starts to behave weird
at >90, and while up to 125 or so you get the bugfeature of higher
jumping, beyond that things just get even buggier, at some point causing
bugs like #261
The original client used single precision mode on Windows and the
default mode on all other platforms. Most platforms (at least OS X,
FreeBSD, NetBSD up to 6.0, OpenBSD and Solaris) set double precision
as default, Linux sets extended double precision... When playing a
network game there're several possibilities:
* Same precision on both sides: This one is okay, of course.
* single precision <-> double precision: This one is okay, too. I guess
this is because the code allows a small deviation between client and
server to work around imprecisions introduced by the network protocol.
* double precision <-> extended double precision: This one is okay,
likely for the same reasons given above.
* single precision <-> extended double precision: This one gives a lot
of mispredictions on the client side.
All of these are more or less academic these days. Yamagi Quake II used
the platform's default mode for ages. And both gcc and clang default to
SSE2 math (with double precision as default on all platforms) when
compiling for amd64. So the only reasonable case is Linux/i386 on one
side and the original client or another source port on Windows/i386 at
the other side.
Work around this by forcing the x87 to double precision mode.
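A sketch of how that can be done with glibc on Linux/i386 (other
platforms use different APIs):

    #if defined(__linux__) && defined(__i386__)
    #include <fpu_control.h>

    /* Switch the x87 FPU from extended to double precision mode. */
    static void ForceDoublePrecision(void)
    {
        fpu_control_t cw;

        _FPU_GETCW(cw);
        cw &= ~_FPU_EXTENDED;
        cw |= _FPU_DOUBLE;
        _FPU_SETCW(cw);
    }
    #endif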
Miscframes are coupled to renderframes and are just checking for
renderer changes (very cheap) and advancing CD audio if implemented.
There's no reason not to do that at every frame.
Until now the curtime variable was set at every call to Sys_*seconds().
That's a little bit unfortunate because calls to those functions are
scattered around the code. Instead set it once every frame in
Qcommon_Frame().
Yes, this duplicates some code. But it's at least 100 times more
readable to have two distinct functions for distinct purposes instead
of about 25 #ifdef.
Having the server in its own timing zone seems to simplify things but
introduces slight timing discrepancies. The most visible effect is that
the game runs a little bit too fast, especially in the first cl_maxfps
frames.
Therefore: Remove timeframes, they're unnecessary. Track the time since
the last (client|server) frame instead and pass it to the client and
server when it's called.
This allows us to implement the global timing without an artificial
brake slowing the game down unnecessarily. This is only partially
working, more changes and fixes are coming.
This is a no-op for now. We need this to get a much higher precision
when calculating the frame times. This changes the fixedtime cvar from
milli- to microseconds.
This is the same as the client does for its realtime. It looks at least
somewhat more correct since it prevents rounding errors. And things are
simplified a little bit since the server timing is now independent of the
global timing.
This is the same as the well known Sys_Milliseconds() but like the name
suggests with microsecond precision. To be used in the upcoming new
framecounter.
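A sketch of a possible Unix implementation (the real code may handle
the start offset differently):

    #include <time.h>

    /* Monotonic microsecond counter, starting near 0 at the first call. */
    long long Sys_Microseconds(void)
    {
        static struct timespec first;
        static int initialized;
        struct timespec now;

        clock_gettime(CLOCK_MONOTONIC, &now);

        if (!initialized)
        {
            first = now;
            initialized = 1;
        }

        return (long long)(now.tv_sec - first.tv_sec) * 1000000ll +
               (now.tv_nsec - first.tv_nsec) / 1000;
    }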