We're printing only the two relevant pieces of information: a download
was queued, and a download finished or failed. That's enough to see
what's going on without being too noisy.
...and fix the bugs that were worked around with that crap instead.
This removes some corner cases like the cancellation of all HTTP
downloads and the fallback to UDP if too many 404 errors were generated.
If this is still a problem in reality - for example HTTP servers
blocking the client after too many 404s, or even crashing - fix the
server and don't force the clients to work around it.
We aren't in 1997 anymore, today's broadband connections are fast enough
to handle multiple large files. This may download some assets twice if
the server provides both a pak file with all assets and the assets as
plain files.
There's no need to parse the HTTP header on our side of things, CURL can
do that for us. And the old buffer code was overcomplicated, simplify
it. While at it switch to normal malloc().
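A minimal sketch of how letting CURL handle the headers might look; the
struct and callback names are illustrative, not the actual cl_http.c
code:

    #include <curl/curl.h>
    #include <stdlib.h>
    #include <string.h>

    typedef struct
    {
        char *data;
        size_t size;
    } dlbuf_t;

    /* libcurl hands us only the response body here, the headers
       were already parsed and consumed by CURL itself. */
    static size_t write_cb(char *ptr, size_t size, size_t nmemb, void *userdata)
    {
        dlbuf_t *buf = userdata;
        size_t bytes = size * nmemb;
        char *tmp = realloc(buf->data, buf->size + bytes);

        if (tmp == NULL)
        {
            return 0; /* returning less than 'bytes' aborts the transfer */
        }

        buf->data = tmp;
        memcpy(buf->data + buf->size, ptr, bytes);
        buf->size += bytes;

        return bytes;
    }

    /* Wired up with:
       curl_easy_setopt(handle, CURLOPT_WRITEFUNCTION, write_cb);
       curl_easy_setopt(handle, CURLOPT_WRITEDATA, &buf); */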
While the download speed calculations may be useful, their
implementation is crappy and the integration into the console is rather
fragile. If we really want to support the progress bar with HTTP
downloads this needs to be reimplemented.
The URL generation logic was buggy, it took the local fs_gamedir into
account when determining the file's path on the server. That could have
worked in r1q2 or q2dos since they assume that the executable dir is
also the config dir. But it breaks in YQ2 where the executable dir and
the config dir differ.
These are:
- CL_ResetPrecacheCheck(): Resets the precacher, forces it to reevaluate
which assets are available and what needs to be downloaded.
- FS_FileInGamedir(): Checks if a file (and only a real file, not
something in a pak) is available in fs_gamedir.
- FS_AddPAKFromGamedir(): Adds a pak in fs_gamedir to the search path.
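For reference, plausible declarations of the three helpers; the exact
signatures in the real headers may differ:

    /* Resets the precacher so it reevaluates which assets are
       available and what needs to be downloaded. */
    void CL_ResetPrecacheCheck(void);

    /* true if 'filename' exists as a real file in fs_gamedir,
       files inside paks are ignored. */
    qboolean FS_FileInGamedir(const char *filename);

    /* Adds the pak 'pakfile' in fs_gamedir to the search path. */
    qboolean FS_AddPAKFromGamedir(const char *pakfile);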
This is a very first cut:
* It compiles
* It doesn't crash
What's missing:
* cmake integration
* CURL should be loaded dynamically
* Integration between download code and filesystem
* Likely UTF-8 stuff
* cl_http.c needs cleanup
* Windows support
We're taking indices and converting them to pointers relative to the
hunk's base. Yes, that's dirty. Since the indices are stored as 32 bit
values and hunks are generally small, using 32 bit pointers is enough,
even on 64 bit platforms. So the code took the size of void* / 2...
See the problem? Yes, that's not a good idea on 32 bit platforms. Bite
the bullet and just take the size of void*. Shouldn't be a problem,
because the indices are the first thing that's loaded and the hunk is
trimmed right after it anyways. If, and only if, we really need each and
every byte in the early stages of map loading we need two cases: one for
64 bit and one for 32 bit.
This fixes issue #346. Kudos to @ricardosdl for the analysis.
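A hedged sketch of the sizing problem; the names are illustrative, not
the actual loader code:

    /* Each 32 bit index from disk becomes a pointer in the hunk.
       sizeof(void *) / 2 is 4 on 64 bit platforms, which happens
       to work, but only 2 on 32 bit platforms where pointers
       need 4 bytes -- half the required space. */
    out = Hunk_Alloc(count * (sizeof(void *) / 2)); /* broken */

    /* Just reserve a full pointer per index. The hunk is trimmed
       right after loading, so the extra bytes on 64 bit platforms
       are given back anyways. */
    out = Hunk_Alloc(count * sizeof(void *));       /* fixed */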
Lost time is time that we spent but didn't account for. So the lost time
doesn't shorten a second (in fact that would mean we'd lose the time
twice), it lengthens a second. This has a small but noticeable impact on
timing when running with vsync enabled.
In src/backends/unix/network.c:
* line 181: Assignment of function parameter has no effect outside the function. Did you forget dereferencing it?
* line 276: The scope of the variable 'tmp' can be reduced.
* line 665: The scope of the variable 'mcast_addr' can be reduced.
* line 665: The scope of the variable 'mcast_port' can be reduced.
* line 666: The scope of the variable 'error' can be reduced.
* line 775: The scope of the variable 'i' can be reduced.
In src/backends/windows/network.c:
* line 186: Assignment of function parameter has no effect outside the function. Did you forget dereferencing it?
* line 287: The scope of the variable 'tmp' can be reduced.
* line 707: The scope of the variable 'mcast_addr' can be reduced.
* line 707: The scope of the variable 'mcast_port' can be reduced.
* line 1049: The scope of the variable 'err' can be reduced.
* line 1163: The scope of the variable 'i' can be reduced.
In src/client/menu/menu.c
arrayIndexOutOfBounds:
* line 1921: Array 'creditsIndex[256]' accessed at index 256, which is out of bounds.
variableScope:
* line 332: The scope of the variable 'item' can be reduced.
* line 533: The scope of the variable 'x' can be reduced.
* line 533: The scope of the variable 'y' can be reduced.
* line 838: The scope of the variable 'b' can be reduced.
* line 864: The scope of the variable 'b' can be reduced.
* line 1910: The scope of the variable 'n' can be reduced.
* line 2199: The scope of the variable 'str' can be reduced.
* line 2812: The scope of the variable 'length' can be reduced.
* line 2813: The scope of the variable 'i' can be reduced.
* line 3838: The scope of the variable 'c' can be reduced.
* line 4112: The scope of the variable 'scratch' can be reduced.
* line 4181: The scope of the variable 'i' can be reduced.
* line 4345: The scope of the variable 's' can be reduced.
In src/game/player/hud.c
arrayIndexOutOfBounds:
* line 132: Array itemlist[43] accessed at index 255 which is out of bounds.
Itemlist is assigned only once and has only 43 items, better ignore
nonexistent items.
variableScope:
* line 82: The scope of the variable 'n' can be reduced.
* line 217: The scope of the variable 'x' can be reduced.
* line 217: The scope of the variable 'y' can be reduced.
* line 218: The scope of the variable 'cl' can be reduced.
* line 583: The scope of the variable 'cl' can be reduced.
With this change the code matches the comment. While most packet frames
are renderer frames, we must not take the render frame time into account
when calculating the average time spent processing the packet frames. I
don't think that this change makes any measurable difference since the
packet frame time is just a very small fraction of the renderer frame
time.
I guess that these calls were added since dlopen() might ignore the
global LD_LIBRARY_PATH if EUID != UID. In Yamagi Quake II this isn't
a problem because we're enforcing EUID == UID in main() and don't use
LD_LIBRARY_PATH anyways.
These calls broke the jack audio server and maybe some other programs.
The problem was debugged by @ScrelliCopter. This commit closes issue
#270.
* Take the cache line rounding into account when allocating a hunk.
* Use size_t where appropriate.
* Remove some unnecessary and likely broken calls.
Another case of "How could this ever work?".
We need to reset the avgrenderframetime and avgpacketframetime variables
when we recalculate the average times spent rendering frames and / or
processing packet frames. Otherwise the result will be much too high,
leading to lost frames down below. I wonder why nobody complained about
this until now.
While at it lower the security margin from 2% to just 1%. On the one
hand we need a small security margin, because Quake II is not that
precise and our average times spent may be too low. On the other hand
the margin must not be too large, because the bigger the margin, the
bigger the risk of missing frames... Both 2% and 1% are very small, in
fact often they don't even make a difference because of float -> int
conversions. For example 16 * 0.02 = 0.32 -> cut to 0. But there are
some extreme cases where it matters, and my empirical testing showed
that 1% is slightly better than 2%. At least on my system. And it's
better to f*ck up the timing for a small number of frames than to miss
a frame. Broken timing is hard to recognize, a missed frame is much
more annoying.
In issue #296 it was pointed out that the menu always shows ogg_shuffle
as disabled, even if it's set to 1. This was an oversight, the menu code
was still checking the ogg_sequence cvar removed in the big OGG/Vorbis
refactoring. Update it to match reality.
@DanielGibson pointed out that _SC_PAGESIZE itself is too big,
_SC_PAGESIZE - 1 is correct. Also apply a small performance
optimization by querying _SC_PAGESIZE only once.
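A minimal sketch of the corrected rounding, assuming the usual
round-up-to-page-size idiom; the function name is illustrative:

    #include <unistd.h>

    /* Query the page size only once. */
    static long pagesize;

    static size_t round_to_page(size_t size)
    {
        if (pagesize == 0)
        {
            pagesize = sysconf(_SC_PAGESIZE);
        }

        /* Adding the page size itself would overshoot by a whole
           page whenever 'size' is already aligned, adding
           pagesize - 1 rounds up exactly. */
        return (size + pagesize - 1) & ~(pagesize - 1);
    }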
This prevents gibs and debris being easily destroyed with the rocket
launcher but leaves enough room for the entities being destroyed by
elevators, doors and the like if necessary.
The gibs and debris per frame must be limited to prevent server mem
map overflows. Until now debris and gibs were handled the same, so the
debris spawned by rockets and grenades could prevent the actual gibs of
the killed monster from spawning.
Before this change at most 20 entities were spawned. Now up to 40
entities can be spawned. This needs some testing.
Fixes issue 323.
A long time ago in 2b4f223 I introduced a small logic change to the
handling of stacked entities. If two entities were standing on each
other the original code set the movement speed of the upper entity to 0.
It would be pushed or dragged by the lower entity. I changed that in a
way that the upper entity got the same speed as the lower entity. With
that change it wasn't pushed or dragged but moving on its own. I hoped
to fix some of the 'elevator hurts player or monster' bugs.
That hope was wrong, at a later time we quirked all elevators that hurt
the player. Additionally the change led to physics bugs if entities are
standing on high speed elevators (more than 200 units per second). So
revert it.
The actual fix was already committed as part of 69b6e5a. This is just a
little cleanup commit, mainly for documentation purposes.
This closes #320.
When killing the enforcer with one shot (instead of damaging him first
without killing, which will switch to the bloody skin), the skin wasn't
changed. Now it is.
1) Do not increment the frame rate returned by SDL by 1. Incrementing
   is unnecessary, more or less up to date versions of Nvidia's, AMD's
   and Intel's GPU drivers on relevant platforms return a value that's
   either correct or rounded up to the next integer. And SDL itself
   also rounds up to the next integer, at least in current versions.
   In fact, incrementing the value by one is harmful, it messes up our
   internal timing and leads to subtle mispredictions. Working around
   that in frame.c would add another bunch of fragile magic... So just
   do it correctly. If someone still has broken GPU drivers or SDL
   versions that round down, they can set vid_displayrefreshrate.
2) The calculation of the 5% security margin to pfps in frame.c was
   wrong. It didn't take into account that rfps can be slightly wrong
   in the first place, e.g. 60 on a 59.95hz display. Correct it by
   comparing against rfps including the margin and not the plain value.
Until now the server just called remove() to delete the server's state
from the HDD. That was fine on Linux where UTF-8 is used, but failed
silently on Windows in case the working dir path had some Unicode
characters. Replace remove() by Sys_Remove(); on Linux it's just a
wrapper around remove(), on Windows it does a UTF-8 -> UTF-16 conversion
and calls _wremove(). This fixes issue 318.
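A hedged sketch of what Sys_Remove() might look like; the buffer size
and error handling are simplified:

    #ifdef _WIN32

    #include <windows.h>
    #include <stdio.h>

    int Sys_Remove(const char *path)
    {
        WCHAR wpath[MAX_PATH];

        /* Convert the internal UTF-8 path to Windows' UTF-16. */
        MultiByteToWideChar(CP_UTF8, 0, path, -1, wpath, MAX_PATH);

        return _wremove(wpath);
    }

    #else

    #include <stdio.h>

    int Sys_Remove(const char *path)
    {
        /* On Unix UTF-8 is transparent, plain remove() is fine. */
        return remove(path);
    }

    #endif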
After some pondering I realised that the change was stupid. It
introduces some new subtle bugs, for example in some cases SDL still
rounds 59.95hz down to 59hz...
In the old world GLimp_GetRefreshRate() was called once at renderer
startup. Now in the new world with SDL 2.0 only it's called every frame,
and thus the target framerate got increased by one every frame... That
led to subtle timing problems when vsync is enabled.
While here remove the hack added for some Windows GPU drivers by AMD.
Older versions returned 59 on 59.95hz displays, leading to small timing
problems. This is fixed in newer versions so we don't need to work
around it. Removing the hack gives us somewhat more overall timing
precision. If someone really needs the hack, vid_displayrefresh can be
set to 60 to get the old behaviour.
Until now we just called OGG_Stop() as soon as we read the last samples
from a Vorbis file. OGG_Stop() flushed all unplayed samples (about 12
seconds of playback) from the OpenAL playback queue... Instead just set
our internal state to STOPPED, open the next file and be done.
Make sure that the window is destroyed at GL renderer shutdown and
recreated by the soft renderer. Don't deinitialize SDL in the soft
renderer, that's done by vid.c. And make sure that we start the soft
renderer with a clean GL state.
I've chosen the minimally invasive way for this:
* Import miniz and remove the -lz linker flags.
* Create a short header minizconf.h providing everything we need that
was originally defined by zconf.h and is not provided by miniz.
* Replace zlib.h with miniz.h and minizconf.h.
The input system backend was once used in the client and the renderers,
but for some years now it has been an integral part of the client only.
Move it there.
The last commits did some bigger changes to the interaction between the
GL renderers and the client. The code is now SDL 2.0 conformant, window
and context creation are strictly distinct operations. SDL is only
initialized when necessary. Since this broke the client <-> renderer
API, bump its version.
There are a lot of things left to do for dark and cold winter evenings:
* The software renderer implements its own window handling and
reinitializes SDL whenever vid_restart is called. This is highly
problematic.
* vid_fullscreen is abused to communicate changes to the renderer
config throughout the code. That's a very ugly, messy and potentially
very problematic hack. But not easy to remove.
* Some function calls between the client and the renderer are
unnecessary.
The changes to the client <-> renderer interaction fixed issue #302.
In the old world we deinitialized and reinitialized SDL each time we
restarted or changed the renderer. That would clear the whole GL state.
In the new world we leave SDL running and just recreate the window. In
some cases parts of the old renderer's state would leak into the new
renderer, leading to strange problems.
* Sync both files as much as possible.
* Another round of general cleanup.
* Fix stencil tests.
* Simplify gamma handling, hardware gamma is now default.
* Support new client <-> renderer API.
* Another round of general cleanup.
* Introduce gl3_libgl cvar to force a libGL.
* Fix stencil buffer tests.
* Further untangle window <-> context stuff.
The window is now fully at client side, the context at renderer side.
This is another break of the renderer API. And at least GL1 needs to
track this, it's broken for now.
* Even more syntax and code style fixes.
* Rename functions to match their actual purpose.
* Fix comments.
* SDL initialization and shutdown is now client side only. With
SDL 1.2 finally gone there's no need to involve the renderers
in it.
This breaks the client <-> renderer API. I haven't bumped the API
version with this commit because there're likely more changes when
I'm going through the renderer side of things. The VID backend also
needs a lot of love...
It might be a good idea to move these SDL backend files into the client
and rename them. We'll decide that at a later time.
* Some globals could be made static.
* Add comments where appropriate.
* And format the file to one coding style. What is so hard about
keeping to one style?! My IDE is even able to infer the style from
existing code...
FreeBSD has supported printing backtraces for years. The API is the same
as on Linux, the only difference is that libexecinfo must be linked as a
separate library. Since the last FreeBSD version without backtrace
support (FreeBSD 9.3) went out of support some time ago, unconditionally
enable the printing.
I've used _wfopen_s() because it's newer and the older variants are said
to throw warnings when built with MSVC. But apparently Windows XP hasn't
got that symbol... So just use the normal _wfopen(), MSVC is unsupported
anyways. This may or may not be enough to restore Win XP compatibility.
Until now, likely since we first introduced OGG/Vorbis playback 9 years
ago, in about 50% of all cases OGG_PlayTrack() was never called if
cd_shuffle was set to 1, resulting in missing background music. Add the
missing call. :)
I wonder why I didn't catch this on Sunday. For some reason a "make
clean ; make" cycle was necessary. Maybe a corner case that the header
dependencies didn't catch?
We're going to remove support for SDL 1.2 shortly after the next
release. Give the last remaining users a very clear warning about it,
error out and force them to edit the code.
This happens when you just copy-paste and adapt r_lefthand.
Also did some minor changes to R_AliasDrawModel in the soft renderer
to make sure alias[xy]scale is reset properly in the early out cases.
At high 'fov' values the weapon looked quite distorted.
Now it's rendered with an independent FOV, which looks better.
Note that the 'fov' cvar sets fov_x, while this is based on fov_y
(which is calculated from fov_x), so it's indeed different values:
r_gunfov seems to correspond to fov 90.
We use r_gunfov 80 as default, because it looks better.
The old code was working only when the client was connected to a local
server. The 'newgame' executed by the menu expands to a 'map' command,
loading a map ends in SV_InitGame() which calls CL_Drop() on the local
client. That calls CL_Disconnect() and everything is okay.
When the client is already connected to a remote server and no local
server is running, the 'map' command spawns a new local server. This
new server thinks "Hey, I'm a new local server and no one is connected
to me. Let's pull the client in!". So it pulls the already connected
client onto the new server without disconnecting, smashing its state.
And everything goes down in flames.
The correct way would be to execute a 'disconnect' right before the
'newgame'. But the 'disconnect' cmd calls CL_Disconnect_f() which throws
an ERR_DROP. ERR_DROP is implemented through a longjmp(), and jumping
around puts the process' internal state in ashes... So bite the bullet
and add another hack: call CL_Disconnect() before executing 'newgame'.
The 'game' command was more or less functional after the last commit.
We just need to reset the initialGame (renamed to userGivenGame) so we
don't revert back to the old game at server disconnect.
When connecting to a multiplayer game that runs a different mod ("game"
cvar) than you are, it didn't load the corresponding configs from the
mod, but saved your changes to the config to the mod's config. Which is
doubly useless.
Now when the "game" cvar is changed, the configs are reloaded (from the
right directories for the mod), and when disconnecting the configs are
written, so the changes you did for a mod while playing MP are saved
before game is reset to the game you started with.
Until now we did an easy calculation to determine the frame timing:
1000000 microseconds (== 1 second) / targetframerate == delay between
frames. This works if the CPU and GPU are fast enough, since the time
needed to process the frame is negligible. But if one of them is too
slow, or the GPU driver takes too long (see issue #277 for an example),
we render too few frames.
Work around this by calculating the average time used to process the
last 60 render or packet frames and taking that into account when
determining the delay between the frames. With this change even my
rotten AMD Radeon and its broken Windows GL driver is able to hold the
display's framerate (enforced by vsync) just fine.
While here add a 5% security margin to our target packet frame rate if
vsync is enabled. Just to be sure that we never process more render
than packet frames.
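A rough sketch of the idea, with illustrative names rather than the real
frame.c code:

    #define FRAMESAMPLES 60

    static int frametimes[FRAMESAMPLES]; /* microseconds per frame */
    static int frameidx;

    /* Called once per frame with the time just spent processing it. */
    void Frame_Sample(int micros)
    {
        frametimes[frameidx] = micros;
        frameidx = (frameidx + 1) % FRAMESAMPLES;
    }

    /* Delay until the next frame: the raw budget of one frame minus
       the average time we expect to spend processing it. */
    int Frame_Delay(int targetfps)
    {
        int i, avg = 0;

        for (i = 0; i < FRAMESAMPLES; i++)
        {
            avg += frametimes[i];
        }

        avg /= FRAMESAMPLES;

        int delay = (1000000 / targetfps) - avg;
        return delay > 0 ? delay : 0;
    }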
Modern LCD displays often don't have integral refresh rates like 60hz
but fractional ones like 59.95hz. SDL communicates the refresh rate as
an integer. On X11 the rate is rounded up or down with round(), but on
Windows it's (at least on my system with an AMD Radeon) truncated... So
on a 59.95hz display it's just 59hz, Quake II renders 0.95 frames too
few per second and the user sees microstutters.
And return the actual / requested display frame rate increased by one
to work around inaccuracies in Quake II's internal timing. It shouldn't
be a problem if we're running a little bit too fast.
This is believed to fix at least a part of issue #277.
The problem was that the cvars were only initialized (with Cvar_Get())
if you opened the address book menu.
So if you start (and possibly run) and quit the game /without/ opening
that menu (or at least the "join network server" menu), the game will
not save those cvars to the config when it next writes it.
To prevent this, *always* initialize the cvars in M_Init().
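A minimal sketch of the fix, assuming the address book cvars are named
adr0..adrN as in the original menus; the entry count is illustrative:

    #define NUM_ADDRESSBOOK_ENTRIES 9

    void M_Init(void)
    {
        int i;
        char name[32];

        /* Register the cvars up front so they're always known to
           the cvar subsystem and written to the config, whether or
           not the address book menu was ever opened. */
        for (i = 0; i < NUM_ADDRESSBOOK_ENTRIES; i++)
        {
            Com_sprintf(name, sizeof(name), "adr%d", i);
            Cvar_Get(name, "", CVAR_ARCHIVE);
        }

        /* ... rest of the menu initialization ... */
    }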
There were two ways to implement this. One was to go with the stuff
already included in minizip, the other to implement our own wrapper
around fopen(). This is the second option, since @DanielGibson
convinced me that it would be safer.
We can't rely on the game.dll being unicode conformant. Work around
that by changing the current working directory before calling into the
game.dll, passing a non unicode string to it and changing back after we
return.
To be able to pass UTF-8 encoded paths through cvars, both the cvar
subsystem and the command parser would need a fair amount of UTF-8
understanding. And I'm not the poor soul that's going to implement
that. Therefore pass the datadir through a global variable.
This is done in shared.c so that it's available for the client, the
server, the renderer and the game. A workaround for older game DLLs
will be added at a later time.
On Unix platforms unicode is implemented through UTF-8, which is
transparent for applications. But on Windows a UTF-16 dialect is used
which needs alterations on the application side. This wrapper is
another step towards unicode support on Windows, now we can replace
fopen() by a function that converts our internal UTF-8 paths to
Windows' UTF-16 dialect.
This is a noop for Unix platforms. The Windows build is broken, the
compiler errors out in shared.h. This will be fixed in a later commit.
Caveats:
* fopen() calls in 3rd party code (stb_* and unzip) are not replaced.
This may become a problem. We need to check that.
* In the Unix specific code fopen() isn't replaced since it's not
necessary.
With this commit YQ2 is able to start and run on ReFS volumes. :) At
least as long as neither the binary path, the game data path nor the
path to the user's home directory contain anything but ASCII
characters.
Please note: This may break some corner cases with home directories
containing unicode characters. They worked until now by pure luck. A
better solution providing full unicode support will be committed in the
next few days.
This brings at least two big advantages:
* No more 8.3 filename fuckups. Until now base0.pak and base0.pak_bak
were the same file for Quake II because only the first 3 characters of
the file extension were taken into account.
* Search paths can contain any Unicode character.
There's no need to exclude directories from search by flags. In fact
the Unix backend has worked nicely for years without it... Sadly we
can't remove the now superfluous 'canhave' and 'musthave' attributes
from Sys_FindFirst() and Sys_FindNext() since they're defined in
shared.h and may be used from custom game DLLs.
* Remove a bunch of unnecessary functions.
* Reorder functions into logical groups. The ordering is now the same
on Unix and Windows.
While at it add several TODOs to the code. There's no need for special
library loading functions for the game, the Windows backend still uses
a lot of old and fishy DOS functions, etc. All this will be done at a
later time.
There's no need to duplicate the machine independent parts of the
client initialization and the main loop for every platform.
While at it remove the nearly empty unix.h header and move the Windows
main() into its own file. Now both platforms have the same basic
layout.
While building the wrapper as a console application is completely fine,
there are some advantages to creating a "real" Windows GUI application:
* Console applications always spawn an annoying console window.
* Windows GUI applications seem to have a much lower chance of
triggering my new best friend, the Windows Defender. As a console
application quake.exe triggered it every time I started it, as a
Windows GUI application not even once.
Use WinMain() instead of wWinMain() because MinGW doesn't know about
the latter and it doesn't matter anyways.
libSDLmain.a has to be linked and must run anyways. So there's no need
for us to reinvent the wheel, just rely on SDL's process setup, argument
parsing, message handling and so on. As a nice side effect this may fix
some strange bugs related to message handling and argument parsing...
My modifications (jpeg writing and supplying zlib compressor for better
PNG compression) have been merged upstream, so from now on updates
should be easy and painless.
(Sean renamed my stbi_png_level to stbi_write_png_compression_level)
Until now we had 3 modes:
0 -> never grab the mouse.
1 -> always grab the mouse.
2 -> ungrab the mouse if the game is windowed and the console or the
menu is opened or a cinematic is playing.
The new 3rd mode is the same as the 2nd one, but without the "game is
windowed" constraint. Please note that releasing the mouse grab in
fullscreen may have side effects like the game losing focus and being
unable to regain it, especially under X11.
This was requested by @prg318 in issue #271.
The loop 'for ( i = 0; i < 3; i++ )' sets values to vtx[0..2]. So the
next index must be 3 (instead of 4) and the loop
'for ( i = 16; i >= 0; i-- )' will set vtx[3..(18*3-1)].
=====
src/client/refresh/gl/r_light.c: In function ‘R_RenderDlight’:
src/client/refresh/gl/r_light.c:76:21: warning: iteration 16 invokes undefined behavior [-Waggressive-loop-optimizations]
vtx[index_vtx++] = light->origin [ j ] + vright [ j ] * cos( a ) * rad
~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ vup [ j ] * sin( a ) * rad;
~~~~~~~~~~~~~~~~~~~~~~~~~~~~
src/client/refresh/gl/r_light.c:65:2: note: within this loop
for ( i = 16; i >= 0; i-- )
^~~
=====
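A sketch of the corrected loops, close to the warning's context; names
and details may differ from the real r_light.c:

    #include <math.h>

    typedef float vec3_t[3];

    /* Builds the triangle fan vertices for one dynamic light:
       'origin' is the light's position, 'vpn'/'vright'/'vup' the
       view axes, 'rad' the light radius. */
    static void BuildDlightVerts(const vec3_t origin, const vec3_t vpn,
            const vec3_t vright, const vec3_t vup, float rad,
            float vtx[3 * 18])
    {
        int i, j, index_vtx = 0;

        /* Center vertex: fills vtx[0..2], leaving index_vtx at 3.
           Starting the rim loop at 4 instead made it write one
           float past the end of the array. */
        for (i = 0; i < 3; i++)
        {
            vtx[index_vtx++] = origin[i] - vpn[i] * rad;
        }

        /* Rim vertices: 17 * 3 floats, vtx[3..53]. */
        for (i = 16; i >= 0; i--)
        {
            float a = i / 16.0f * (float)M_PI * 2.0f;

            for (j = 0; j < 3; j++)
            {
                vtx[index_vtx++] = origin[j]
                    + vright[j] * cos(a) * rad
                    + vup[j] * sin(a) * rad;
            }
        }
    }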
Apparently something (possibly Nvidia's driver) on some Windows
installations has some stupid application profile for quake2.exe that
breaks mouse input if the console has been opened.
Our workaround is to rename quake2.exe to yquake2.exe and provide a
wrapper quake2.exe that just calls the real one for backwards
compatibility. This is the source of that wrapper.
This is mostly the same approach as in GL1. I'm not quite sure if the
software rasterizer can work with all aspect ratios and the like, but I
wasn't able to crash it by trying several random resolutions.
With this, renamed cvars can be rewritten when config.cfg is first
loaded. Please note that once this was done older YQ2 versions can't
parse that config.cfg anymore.
gl_maxfps > 1000 breaks things, and cl_maxfps starts to behave weird at
>90. While up to 125 or so you get the bugfeature of higher jumping,
beyond that things just get even buggier, at some point causing bugs
like #261.
If too many of these sounds are started in one frame (for example if the
player shoots with the super shotgun into the power screen of a Brain)
things get too loud and OpenAL is forced to scale the volume of several
other sounds and the background music down. That leads to a noticeable
and annoying drop in the overall volume.
Work around that by limiting the number of sounds started. 16 was
chosen by empirical testing.
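A minimal sketch of such a cap; the names and the reset point are
illustrative, not the real sound code:

    #define MAX_FRAME_SOUNDS 16 /* chosen by empirical testing */

    static int framesounds; /* sounds started in the current frame */

    /* Called once at the start of every client frame. */
    void S_FrameReset(void)
    {
        framesounds = 0;
    }

    /* Returns true if another sound may be started this frame. */
    int S_MayStartSound(void)
    {
        return framesounds++ < MAX_FRAME_SOUNDS;
    }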
This was so broken... Casting the type of an array to silence a
warning... It worked on x86, of course. But gave a SIGBUS on ARM.
Do it right, cast / copy the content of the array into another
array of the correct type. Yeah.
This fixes issue #231.
There are two possible problems with the calculation of the number of
sound buffers for Vorbis if OpenAL is in use:
* We assume that the (more or less) maximum number of buffers is
allocated during map load. This is not correct if, in a multiplayer
game, a lot of custom models with custom sounds connect at a later
time.
* 64 buffers (about 3 seconds worth of music) may be too low in some
situations.
Work around this by recalculating the number of buffers if necessary.
We're now reserving about 256 (== 12 seconds) buffers.
This may fix issue #252.
Turns out clock_get_time() uses mach_timespec_t, which is very similar
to the POSIX struct timespec, so we're back to just one
Sys_Microseconds() function with an #ifdef __APPLE__ for the
(relatively small) differences.
Older versions of OS X don't implement clock_gettime() and no(?) version
seems to implement CLOCK_MONOTONIC. Work around this by implementing an
OS X specific variant of Sys_Microseconds() that relies on Mach APIs
provided by all OS X versions...
While at it alter the generic variant so that CLOCK_MONOTONIC is used
only if it's available. CLOCK_REALTIME as a fallback should be good
enough in most cases.
This is believed to fix issue #239.
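A hedged sketch of the generic variant; the real function may
additionally subtract a base time recorded at startup:

    #include <time.h>

    long long Sys_Microseconds(void)
    {
    #ifdef CLOCK_MONOTONIC
        static const clockid_t cid = CLOCK_MONOTONIC;
    #else
        /* CLOCK_REALTIME can jump, but it's a good enough
           fallback in most cases. */
        static const clockid_t cid = CLOCK_REALTIME;
    #endif
        struct timespec now;

        clock_gettime(cid, &now);

        return (long long)now.tv_sec * 1000000ll + now.tv_nsec / 1000ll;
    }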
We need to take into account that scaling the characters makes them
bigger, thus they need to be placed depending on the scale and not at a
precalculated position. This should fix issue #247.
Returning 'microseconds / 1000ll' at the first call is wrong, the game
would think that the first frame took way too much time. For some
reason this works on (my) Win10, but breaks on (my) Win7...
The original client used single precision mode on Windows and the
default mode on all other platforms. Most platforms (at least OS X,
FreeBSD, NetBSD up to 6.0, OpenBSD and Solaris) set double precision
as default, Linux sets extended double precision... When playing a
network game there are several possibilities:
* Same precision on both sides: This one is okay, of course.
* single precision <-> double precision: This one is okay, too. I guess
this is because the code allows a small deviation between client and
server to work around imprecisions introduced by the network protocol.
* double precision <-> extended double precision: This one is okay,
likely for the same reasons given above.
* single precision <-> extended double precision: This one gives a lot
of mispredictions at client side.
All of these are more or less academic these days. Yamagi Quake II used
the platform's default mode for ages. And both gcc and clang default to
SSE2 math (with double precision as default on all platforms) when
compiling for amd64. So the only reasonable case is Linux/i386 on one
side and the original client or another source port on Windows/i386 at
the other side.
Work around this by forcing the x87 to double precision mode.
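A minimal sketch for Linux/glibc on i386, using the glibc FPU control
word macros; other platforms need different headers or inline assembly:

    #include <fpu_control.h>

    void Sys_SetupFPU(void)
    {
        fpu_control_t old_cw, new_cw;

        /* Read the current x87 control word, clear the precision
           bits and select double precision. */
        _FPU_GETCW(old_cw);
        new_cw = (old_cw & ~_FPU_EXTENDED) | _FPU_DOUBLE;
        _FPU_SETCW(new_cw);
    }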
Miscframes are coupled to renderframes and are just checking for
renderer changes (very cheap) and advancing CD audio if implemented.
There's no reason not to do that at every frame.
Until now the curtime variable was set at every call to
Sys_*seconds(). That's a little bit unfortunate because calls to those
functions are scattered around the code. Instead set it once every
frame in Qcommon_Frame().