* Take the cache line roundings into account when allocating a hunk.
* Use size_t where appropriate.
* Remove some unnecessary and likely broken calls.
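As an illustration, a minimal sketch of the cache line rounding; the
64 byte cache line size, the hunk layout and Hunk_AllocAligned() are
assumptions, not the actual hunk code:

    #include <stddef.h>

    #define CACHELINE_SIZE 64

    typedef struct {
        unsigned char *base; /* start of the hunk */
        size_t maxsize;      /* total size of the hunk */
        size_t cursize;      /* bytes already handed out */
    } hunk_t;

    /* Round the request up to the next cache line before carving it
     * out of the hunk, so consecutive allocations stay aligned. */
    static void *
    Hunk_AllocAligned(hunk_t *hunk, size_t size)
    {
        size = (size + CACHELINE_SIZE - 1) & ~((size_t)CACHELINE_SIZE - 1);

        if (hunk->cursize + size > hunk->maxsize)
        {
            return NULL; /* the real code would error out instead */
        }

        void *ptr = hunk->base + hunk->cursize;
        hunk->cursize += size;

        return ptr;
    }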
Another case of: "How could this ever work?"
We need to reset the avgrenderframetime and avgpacketframetime variables
when we recalculate the average times spent rendering frames and / or
processing packet frames. Otherwise the result will be much too high,
leading to lost frames further down. I wonder why nobody complained
about this until now.
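A minimal sketch of the fix; only avgrenderframetime and
avgpacketframetime are real, the arrays and parameters are made up:

    static float renderframetimes[60];
    static float packetframetimes[60];
    static float avgrenderframetime;
    static float avgpacketframetime;

    /* Recompute the averages over the last numrender / numpacket
     * frames (both assumed to be > 0). */
    static void
    RecalcAverageTimes(int numrender, int numpacket)
    {
        int i;

        /* Without these resets the old averages are summed into the
         * new ones and the results grow far too high over time. */
        avgrenderframetime = 0.0f;
        avgpacketframetime = 0.0f;

        for (i = 0; i < numrender; i++)
        {
            avgrenderframetime += renderframetimes[i];
        }
        avgrenderframetime /= numrender;

        for (i = 0; i < numpacket; i++)
        {
            avgpacketframetime += packetframetimes[i];
        }
        avgpacketframetime /= numpacket;
    }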
While at it, lower the security margin from 2% to just 1%. On the one
hand we need a small security margin, because Quake II is not that
precise and our average times spent may be too low. On the other hand
the margin must not be too large, because the bigger the margin is, the
bigger the risk of missing frames... Both 2% and 1% are very small; in
fact they often don't even make a difference because of float -> int
conversions. For example 16 * 0.02 = 0.32 -> truncated to 0. But there
are some extreme cases where it matters, and my empirical testing showed
that 1% is slightly better than 2%. At least on my system. And it's
better to f*ck up the timing for a small number of frames than to miss
a frame. Broken timing is hard to recognize, a missed frame is much
more annoying.
In issue #296 it was pointed out that the menu always shows ogg_shuffle
as disabled, even if it's set to 1. This was an oversight: the menu
code was still checking the ogg_sequence cvar removed in the big
OGG/Vorbis refactoring. Update it to match reality.
@DanielGibson pointed out that _SC_PAGESIZE itself is too big,
_SC_PAGESIZE - 1 is correct. Also apply a small performance
optimization by querying _SC_PAGESIZE only once.
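A sketch of the corrected rounding, with a made up helper name:

    #include <stddef.h>
    #include <unistd.h>

    /* Round len up to a multiple of the page size. The mask must be
     * pagesize - 1, not _SC_PAGESIZE itself, and the page size is
     * queried from sysconf() only once. */
    static size_t
    RoundToPageSize(size_t len)
    {
        static size_t pagesize = 0;

        if (pagesize == 0)
        {
            pagesize = (size_t)sysconf(_SC_PAGESIZE);
        }

        return (len + pagesize - 1) & ~(pagesize - 1);
    }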
This prevents gibs and debris from being easily destroyed with the
rocket launcher but leaves enough room for the entities to be destroyed
by elevators, doors and the like if necessary.
The number of gibs and debris per frame must be limited to prevent
server mem map overflows. Until now debris and gibs were handled the
same, so the debris spawned by rockets and grenades could prevent the
actual gibs of the killed monster from spawning.
Before this change at most 20 entities were spawned. Now up to 40
entities can be spawned. This needs some testing.
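A sketch of the idea with separate per frame budgets; the helper
functions and counters are assumptions, the 20 + 20 = 40 split follows
the numbers above:

    #include <stdbool.h>

    #define MAX_GIBS_PER_FRAME   20
    #define MAX_DEBRIS_PER_FRAME 20

    static int gibsthisframe;
    static int debristhisframe;
    static int lastspawnframe;

    /* Reset both budgets whenever a new server frame starts. */
    static void
    CheckSpawnFrame(int framenum)
    {
        if (framenum != lastspawnframe)
        {
            gibsthisframe = 0;
            debristhisframe = 0;
            lastspawnframe = framenum;
        }
    }

    static bool
    CanSpawnGib(int framenum)
    {
        CheckSpawnFrame(framenum);
        return gibsthisframe++ < MAX_GIBS_PER_FRAME;
    }

    static bool
    CanSpawnDebris(int framenum)
    {
        CheckSpawnFrame(framenum);
        return debristhisframe++ < MAX_DEBRIS_PER_FRAME;
    }

With separate budgets the debris of a rocket or grenade can no longer
use up the slots that the gibs of the killed monster need.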
Fixes issue 323.
A long time ago in 2b4f223 I introduced a small logic change to the
handling of stacked entities. If two entities were standing on each
other, the original code set the movement speed of the upper entity
to 0; it would be pushed or dragged by the lower entity. I changed
that in a way that the upper entity got the same speed as the lower
entity. With that change it wasn't pushed or dragged but moving on
its own. I hoped to fix some of the 'elevator hurts player or monster'
bugs.
That hope was wrong; at a later time we quirked all elevators that hurt
the player. Additionally the change led to physics bugs if entities are
standing on high speed elevators (more than 200 units per second). So
revert it.
The actual fix was already committed as part of 69b6e5a. This is just a
little cleanup commit, mainly for documentation purposes.
This closes #320.
When killing the enforcer with one shot (instead of damaging him first
without killing, which will switch to the bloody skin), the skin wasn't
changed. Now it is.
1) Do not increment the frame rate returned by SDL by 1. Incrementing
is unnecessary, more or less up to date versions of Nvidia's, AMD's
and Intel's GPU drivers on the relevant platforms return a value that's
either correct or rounded up to the next integer. And SDL itself also
rounds up to the next integer, at least in current versions. In fact,
incrementing the value by one is harmful, it messes up our internal
timing and leads to subtle mispredictions. Working around that in
frame.c would add another bunch of fragile magic... So just do it
correctly. If someone still has broken GPU drivers or SDL versions
that round down, they can set vid_displayrefreshrate.
2) The calculation of the 5% security margin to pfps in frame.c was
wrong. It didn't take into account that rfps can be slightly wrong
in the first place, e.g. 60 on a 59.95hz display. Correct it by
comparing against rfps including the margin and not the plain value.
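A sketch of the corrected clamping; rfps and pfps are the names used
above, the function and its inputs are assumptions:

    #include <math.h>

    /* Clamp the requested packet frame rate (maxfps) against the
     * render frame rate (rfps). rfps may already be a bit high, e.g.
     * 60 reported for a 59.95hz display, so the comparison must use
     * rfps including the 5% margin, not the plain value. */
    static float
    ClampPacketFramerate(float rfps, float maxfps)
    {
        /* wrong: (maxfps > rfps) ? rfps * 0.95f : maxfps */
        return (maxfps > rfps * 0.95f) ? (float)floor(rfps * 0.95f) : maxfps;
    }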
Until now the server just called remove() to delete the server's state
from the HDD. That was fine on Linux where UTF-8 is used, but failed
silently on Windows in case the working dir path contained Unicode
characters. Replace remove() with Sys_Remove(); on Linux it's just a
wrapper around remove(), on Windows it does a UTF-8 -> UTF-16 conversion
and calls _wremove(). This fixes issue 318.
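A minimal sketch of such a Sys_Remove(); the MAX_PATH sized buffer is
an arbitrary assumption:

    #ifdef _WIN32
    #include <windows.h>
    #include <stdio.h>

    int
    Sys_Remove(const char *path)
    {
        WCHAR wpath[MAX_PATH];

        /* Convert the UTF-8 path to UTF-16 and use the wide API. */
        MultiByteToWideChar(CP_UTF8, 0, path, -1, wpath, MAX_PATH);

        return _wremove(wpath);
    }
    #else
    #include <stdio.h>

    int
    Sys_Remove(const char *path)
    {
        /* On Linux remove() handles UTF-8 paths just fine. */
        return remove(path);
    }
    #endif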
After some pondering I realised that the change was stupid. It
introduces some new subtle bugs, for example in some cases SDL
still rounds 59.95hz down to 59hz...
In the old world GLimp_GetRefreshRate() was called once at renderer
startup. Now, in the new world with SDL 2.0 only, it's called every
frame and thus the target framerate got increased by one every frame...
That led to subtle timing problems when vsync is enabled.
While here, remove the hack added for some Windows GPU drivers by AMD.
Older versions returned 59 on 59.95hz displays, leading to small timing
problems. This is fixed in newer versions, so we don't need to work
around it anymore. Removing the hack gives us somewhat more overall
timing precision.
If someone really needs the hack, vid_displayrefreshrate can be set to
60 to get the old behaviour.