Turns out clock_get_time() uses mach_timespec_t, which is very similar
to the POSIX struct timespec, so we're back to just one Sys_Microseconds()
function with an #ifdef __APPLE__ for the (relatively small) differences.
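A minimal sketch of what the merged function could look like; the
variable names are assumptions, only the clock_get_time() /
clock_gettime() split behind #ifdef __APPLE__ and the CLOCK_MONOTONIC
fallback are taken from these commits:

    #ifdef __APPLE__
    #include <mach/clock.h>
    #include <mach/mach.h>
    #else
    #include <time.h>
    #endif

    long long
    Sys_Microseconds(void)
    {
    #ifdef __APPLE__
        /* mach_timespec_t has the same tv_sec / tv_nsec members as
           the POSIX struct timespec, only the call filling it differs. */
        clock_serv_t cclock;
        mach_timespec_t now;

        host_get_clock_service(mach_host_self(), SYSTEM_CLOCK, &cclock);
        clock_get_time(cclock, &now);
        mach_port_deallocate(mach_task_self(), cclock);
    #else
        struct timespec now;

    #ifdef CLOCK_MONOTONIC
        clock_gettime(CLOCK_MONOTONIC, &now);
    #else
        /* fallback, good enough in most cases */
        clock_gettime(CLOCK_REALTIME, &now);
    #endif
    #endif

        return (long long)now.tv_sec * 1000000ll + now.tv_nsec / 1000ll;
    }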
Older versions of OS X don't implement clock_gettime() and no(?) version
seems to implement CLOCK_MONOTONIC. Work around this by implementing an
OS X specific variant of Sys_Microseconds() that relies on Mach APIs
provided by all OS X versions...
While at it alter the generic variant so that CLOCK_MONOTONIC is used
only if it's available. CLOCK_REALTIME as a fallback should be good
enough in most cases.
This is believed to fix issue #239.
We need to take into account that scaling the characters makes them
bigger, thus they need to be placed depending on the scale and not at
a precalculated position. This should fix issue #247.
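A hypothetical sketch of the idea; the helper and its wiring are
assumptions, Draw_CharScaled() stands for the renderer's scaled
character drawing entry point:

    /* Advance the pen position by the scaled character width instead
       of a precalculated, unscaled offset. */
    void
    DrawStringScaled(int x, int y, const char *s, float scale)
    {
        const int char_width = 8;    /* Quake II conchars are 8x8 pixels */

        while (*s)
        {
            Draw_CharScaled(x, y, *s, scale);
            x += (int)(char_width * scale);
            s++;
        }
    }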
Returning 'microseconds / 1000ll' at the first call is wrong, the game
would think that the first frame took way too much time. For some reason
this works on (my) Win10, but breaks on (my) Win7...
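One way to avoid that, sketched with assumed names: the first call only
establishes a baseline and reports 0, so no startup time is charged to
the first frame:

    int
    Sys_Milliseconds(void)
    {
        static long long base;    /* microseconds at the first call */
        static int first = 1;
        long long microseconds = Sys_Microseconds();

        if (first)
        {
            base = microseconds;
            first = 0;

            return 0;
        }

        return (int)((microseconds - base) / 1000ll);
    }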
The original client used single precision mode on Windows and the
default mode on all other platforms. Most platforms (at least OS X,
FreeBSD, NetBSD up to 6.0, OpenBSD and Solaris) set double precision
as default, Linux sets extended double precision... When playing a
network game there're several possibilities:
* Same precision on both sides: This one is okay, of course.
* single precision <-> double precision: This one is okay, too. I guess
this is because the code allows a small deviation between client and
server to work around imprecisions introduced by the network protocol.
* double precision <-> extended double precision: This one is okay,
likely for the same reasons given above.
* single precision <-> extended double precision: This one gives a lot
of mispredictions on the client side.
All of these are more or less academic these days. Yamagi Quake II used
the platform's default mode for ages. And both gcc and clang default to
SSE2 math (with double precision as default on all platforms) when
compiling for amd64. So the only reasonable case is Linux/i386 on one
side and the original client or another source port on Windows/i386 at
the other side.
Work around this by forcing the x87 to double precision mode.
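A sketch of how the x87 control word can be forced to double precision
on i386; the glibc and MSVC interfaces used here exist, but where and
how the engine actually calls this is an assumption:

    #if defined(__i386__) && defined(__GNUC__) && defined(__linux__)
    #include <fpu_control.h>

    static void
    ForceDoublePrecision(void)
    {
        fpu_control_t cw;

        _FPU_GETCW(cw);
        cw &= ~_FPU_EXTENDED;    /* clear the precision control bits */
        cw |= _FPU_DOUBLE;       /* select the 53 bit mantissa */
        _FPU_SETCW(cw);
    }
    #elif defined(_M_IX86) && defined(_MSC_VER)
    #include <float.h>

    static void
    ForceDoublePrecision(void)
    {
        unsigned int current;

        _controlfp_s(&current, _PC_53, _MCW_PC);    /* 53 bit = double */
    }
    #endif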
Miscframes are coupled to renderframes and are just checking for
renderer changes (very cheap) and advancing CD audio if implemented.
There's no reason not to do that at every frame.
Until now the curtime variable was set at every call to Sys_*seconds().
That's a little bit unfortunate because calls to those functions are
scattered around the code. Instead set it once every frame in
Qcommon_Frame().
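A tiny sketch of the idea (signature and surroundings assumed):

    extern int curtime;    /* the engine's global timestamp */

    void
    Qcommon_Frame(int usec)
    {
        /* Set the timestamp exactly once per frame; everything else
           in this frame just reads curtime. */
        curtime = Sys_Milliseconds();

        /* ... the rest of the frame ... */
    }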
The dedicated server runs at cl_maxfps frames per second. Even with
very large values one server frame can never be shorter than 1
millisecond. And the timing doesn't need to be very precise since the
network latency adds a lot more jitter.
Yes, this duplicates some code. But it's at least 100 times more
readable to have two distinct functions for distinct purposes instead
of about 25 #ifdefs.
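A hedged sketch of what the dedicated-only frame function could look
like; cl_maxfps, SV_Frame() and Sys_Microseconds() appear elsewhere in
these commits, the sleeping loop and its wiring are assumptions:

    #ifdef _WIN32
    #include <windows.h>
    #else
    #include <unistd.h>
    #endif

    static void
    Qcommon_Frame_Dedicated(void)
    {
        static long long last;    /* time of the previous frame in usec */
        long long now = Sys_Microseconds();
        long long target = 1000000ll / (long long)cl_maxfps->value;

        if (!last)
        {
            last = now;    /* first call: just establish the baseline */
        }

        /* One server frame can never be shorter than 1 ms, so
           sleeping in 1 ms steps is precise enough. */
        while (now - last < target)
        {
    #ifdef _WIN32
            Sleep(1);
    #else
            usleep(1000);
    #endif
            now = Sys_Microseconds();
        }

        SV_Frame((int)(now - last));
        last = now;
    }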
As the branch name suggests, this was a rather hard ride. Lessons
learned:
* Quake II's timing is one big clusterfuck.
* The Radeon driver for Windows has a questionable vsync implementation.
What should be done:
* Get rid of the global currenttime.
* DEDICATED_ONLY needs to be refactored.
This commit closes issue #222.
This shouldn't have any noticeable impact on timing (unless the machine
is way too slow for Quake II) and saves a lot of CPU cycles. 100% load
vs. 17% load on my desktop.
Having the server in its own timing zone seems to simplify things but
introduces slight timing discrepancies. The most visible effect is that
the game runs a little bit too fast, especially in the first cl_maxfps
frames.
Therefore: Remove timeframes, they're unnecessary. Track the time since
the last (client|server) frame instead and pass it to the client and
server when it's called.
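A sketch of the idea with assumed names and simplified conditions; the
100 ms server frame is Quake II's usual 10 Hz game tick:

    void
    Qcommon_Frame(int usec)
    {
        static int client_delta;    /* usec since the last client frame */
        static int server_delta;    /* usec since the last server frame */
        int target = (int)(1000000.0f / cl_maxfps->value);

        client_delta += usec;
        server_delta += usec;

        if (server_delta >= 100000)    /* 10 Hz game logic */
        {
            SV_Frame(server_delta);
            server_delta = 0;
        }

        if (client_delta >= target)
        {
            CL_Frame(client_delta);
            client_delta = 0;
        }
    }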
This allows us to implement the global timing without an artificial
brake unnecessarily slowing the game down. This is only partially
working, more changes and fixes are coming.
This is a no-op for now. We need this to get a much higher precision
when calculating the frame times. This changes the fixedtime cvar from
milli- to microseconds.
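A tiny sketch of the changed unit, following the pattern of the
original Qcommon_Frame() (variable names assumed):

    if (fixedtime->value)
    {
        usec = (int)fixedtime->value;    /* fixed frame time, now in usec */
    }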
This is the same as the client does for its realtime. It looks at least
somewhat more correct since it prevents rounding errors. And things are
simplified a little bit since the server timing is now independent of the
global timing.
The old framecounter had two problems:
* It measured only the time of the current render frame, not the total
time spent between the last and the current render frame. Therefore the
calculated value was too high.
* It was based upon milliseconds and rather inaccurate.
This new frame counter solves both problems. The total time spent
between two render frames is measured and the measurement is done in
microseconds.
There're three modes:
* cl_drawfps 1 displays the average frame rate calculated over the last
60 frames.
* cl_drawfps 2 displays a nice string with minimal framerate, maximum
framerate and average framerate. All three values are calculated over
the last 60 frames.
* cl_drawfps 3 is the same as number 2 but with a second line showing the
raw values.
TODO:
* Discuss if cl_drawfps should be renamed to cl_showfps. All other
status displays are named cl_show*.
While at it remove several unused drawing functions.
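A minimal sketch of such a counter with assumed names: the microseconds
between two render frames go into a small ring buffer and min/avg/max
are computed over the last 60 samples:

    #include <limits.h>

    #define FPS_SAMPLES 60

    static int frametimes[FPS_SAMPLES]; /* usec between two render frames */
    static int frameindex;

    /* Called once per render frame with the time since the last one. */
    void
    CL_CountFrameTime(int usec)
    {
        frametimes[frameindex++ % FPS_SAMPLES] = usec;
    }

    /* Minimal, average and maximal framerate over the last samples. */
    void
    CL_FrameStats(float *minfps, float *avgfps, float *maxfps)
    {
        int i, n = 0, min = INT_MAX, max = 0;
        long long total = 0;

        for (i = 0; i < FPS_SAMPLES; i++)
        {
            if (frametimes[i] <= 0)
            {
                continue;    /* ring buffer not filled yet */
            }

            total += frametimes[i];
            n++;

            if (frametimes[i] < min) { min = frametimes[i]; }
            if (frametimes[i] > max) { max = frametimes[i]; }
        }

        if (n == 0)
        {
            *minfps = *avgfps = *maxfps = 0.0f;
            return;
        }

        /* long frame times mean low fps and vice versa */
        *minfps = 1000000.0f / (float)max;
        *maxfps = 1000000.0f / (float)min;
        *avgfps = 1000000.0f / ((float)total / (float)n);
    }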
This is the same as the well known Sys_Milliseconds() but like the name
suggests with microsecond precision. To be used in the upcoming new
framecounter.