We need to take into account that scaling the characters makes them
bigger, so they need to be placed depending on the scale and not at a
precalculated position. This should fix issue #247.
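A minimal sketch of the idea; DrawScaledString() and DrawScaledChar()
are illustrative names, not the actual engine functions:

    /* Advance the pen position by the scaled glyph width instead
     * of a precalculated fixed step. Sketch only. */
    void DrawScaledString(int x, int y, const char *s, float scale)
    {
        const int char_width = 8; /* base glyph width in pixels */

        while (*s)
        {
            DrawScaledChar(x, y, *s, scale);
            x += (int)(char_width * scale); /* position depends on the scale */
            s++;
        }
    }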
The old framecounter had two problems:
* It measured only the time of the current render frame, not the total
time spent between the last and the current render frame. Therefore the
calculated value was too high.
* It was based upon milliseconds and rather inaccurate.
This new frame counter solves both problems. The total time spent
between two render frames is measured, and the measurement is done in
microseconds. A sketch of the averaging follows the mode list below.
There are three modes:
* cl_drawfps 1 displays the average framerate calculated over the last
60 frames.
* cl_drawfps 2 displays a nice string with minimum framerate, maximum
framerate and average framerate. All three values are calculated over
the last 60 frames.
* cl_drawfps 3 is the same as number 2 but with a second line showing the
raw values.
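How the averaging might look, as a minimal sketch (a ring buffer of the
last 60 frame times; all names are illustrative, not the actual code):

    #define FRAMES 60

    static long long frametimes[FRAMES]; /* microseconds between frames */
    static int frameindex;

    void CL_CountFrame(long long now)
    {
        static long long last;

        if (last != 0)
        {
            /* Total time between two render frames, not just
             * the time spent rendering the current one. */
            frametimes[frameindex] = now - last;
            frameindex = (frameindex + 1) % FRAMES;
        }

        last = now;
    }

    float CL_AverageFPS(void)
    {
        long long total = 0;
        int i;

        for (i = 0; i < FRAMES; i++)
        {
            total += frametimes[i];
        }

        /* Only meaningful once the buffer has filled up. */
        return (total > 0) ? (1000000.0f * FRAMES) / total : 0.0f;
    }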
TODO:
* Discuss if cl_drawfps should be renamed to cl_showfps. All other
status displays are named cl_show*.
While at it, remove several unused drawing functions.
This is the same as the well-known Sys_Milliseconds(), but as the name
suggests with microsecond precision. To be used in the upcoming new
framecounter.
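A minimal POSIX sketch of such a function, assuming clock_gettime()
with CLOCK_MONOTONIC is available; a real implementation needs a
platform-specific backend on Windows (e.g. QueryPerformanceCounter()):

    #include <time.h>

    /* Monotonic time in microseconds. Sketch only. */
    long long Sys_Microseconds(void)
    {
        struct timespec ts;

        clock_gettime(CLOCK_MONOTONIC, &ts);

        return (long long)ts.tv_sec * 1000000ll + ts.tv_nsec / 1000ll;
    }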
I really don't see why this constraint was ever necessary. It leads to
one line of pixels not being rendered at either the bottom or the right
edge of the screen. In GL1, for whatever reason, this line is just
black; in GL3, garbage is drawn.
In SCR_DrawFieldScaled() the HUD scale factor wasn't taken into account
when calculating the screen area affected by the change. Therefore
wrong coordinates were passed to SCR_AddDirtyPoint() and part of the
changed area wasn't marked dirty, leading to render artifacts.
This bug has been present since HUD scaling was first introduced.
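The fix boils down to applying the scale factor before marking the
area dirty. Sketched with illustrative variable names:

    /* Before: the unscaled extents were marked dirty. */
    SCR_AddDirtyPoint(x, y);
    SCR_AddDirtyPoint(x + width, y + height);

    /* After: the affected screen area grows with the HUD scale,
     * so the scale factor must be applied to the extents, too. */
    SCR_AddDirtyPoint(x, y);
    SCR_AddDirtyPoint(x + (int)(width * scale), y + (int)(height * scale));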
This is largely based upon the cl_async 1 mode from KMQuake2, which in
turn is based upon r1q2. The origins of this code may be even older...
Unlike KMQuake2, the asynchronous mode is not optional; the client is
always asynchronous. Since we're mainly integrating this rather
fundamental change to simplify the complex internal timing between
client, server and refresh, there's no point in keeping it optional.
The old cl_maxfps cvar controls the network frames. 30 frames should be
enough; even Q3A doesn't use more. The new gl_maxfps cvar controls the
render frames. It's set to 95 fps by default to avoid possible remnants
of the famous 125 Hz bug.
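In effect the main loop now runs two timers. A minimal sketch of the
decoupled timing; the function names are illustrative and the cvar
values are shown as their defaults:

    /* Sketch only; CL_RenderFrame() and CL_SendNetFrame() are
     * illustrative names, not the actual engine functions. */
    static long long last_renderframe, last_netframe;

    void CL_Frame(void)
    {
        long long now = Sys_Microseconds();

        /* Render frames, limited by gl_maxfps (default 95). */
        if (now - last_renderframe >= 1000000 / 95)
        {
            CL_RenderFrame();
            last_renderframe = now;
        }

        /* Network frames, limited by cl_maxfps (default 30). */
        if (now - last_netframe >= 1000000 / 30)
        {
            CL_SendNetFrame();
            last_netframe = now;
        }
    }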
This clamps the UI scale, limiting the relative size of the UI
elements to what they would be at scale 1 in a 320x240 resolution.
Allowing bigger scales is not useful and would make it possible
for the user to shoot themselves in the foot by setting a
"too big" UI scale value in the menu. (Since this would mess up
the menu, it could be hard to recover from for a casual user.)
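A sketch of the clamping; the function name and signature are
illustrative, not the actual code:

    /* Never let UI elements become larger, relative to the screen,
     * than they would be at scale 1 in 320x240. */
    float ClampUIScale(float scale, int screenwidth, int screenheight)
    {
        float max = (float)screenwidth / 320.0f;

        if ((float)screenheight / 240.0f < max)
        {
            max = (float)screenheight / 240.0f;
        }

        return (scale > max) ? max : scale;
    }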
This function is only used in cl_view.c, so there's no need for an
external declaration. Reimplement it in a sane way that is 64 bit clean
on all platforms. This allows us to finally remove the horrible INT
macro.
Without this change the width of the render window was required to be a
multiple of 8, making it impossible to use strange resolutions like
1366x768. This change is based upon an idea submitted by "tmcp" in pull
request 27.
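The removed rounding presumably looked something like this (sketch,
not the actual lines):

    /* Old behaviour: force the width down to a multiple of 8,
     * so e.g. a requested 1366 became 1360. */
    width &= ~7;

    /* With the constraint removed, the requested width is used
     * unchanged, so 1366x768 works. */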
Revert "change several strcat calls to Q_strlcat calls"
This reverts commit ab879f1bc7.
Revert "change (v)sprintf calls to (v)snprintf calls"
This reverts commit b46e210d76.