With this, renamed cvars can be rewritten when config.cfg is first
loaded. Please note that once this has been done, older YQ2 versions
can't parse that config.cfg anymore.
gl_maxfps > 1000 breaks things, and cl_maxfps starts to behave weirdly
above 90. Up to 125 or so you get the bugfeature of higher jumping, but
beyond that things just get even buggier, at some point causing bugs
like #261.
If too many of these sounds are started in one frame (for example if the
player shoots with the super shotgun into the power screen of a Brain)
things get too loud and OpenAL is forced to scale the volume of several
other sounds and the background music down. That leads to a noticeable
and annoying drop in the overall volume.
Work around that by limiting the number of sounds started in one frame.
16 was chosen by empirical testing.
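A minimal sketch of such a limiter, assuming hypothetical names
(S_MayStartSound, started_this_frame) rather than the actual YQ2
identifiers:

```c
/* Hypothetical sketch of the limiter, not the actual YQ2 code. */
#define MAX_STARTED_SOUNDS_PER_FRAME 16 /* chosen by empirical testing */

static int started_this_frame;

/* called once at the start of every client sound frame */
void S_ResetStartedCount(void)
{
	started_this_frame = 0;
}

/* called for every sound that is about to be handed to OpenAL */
int S_MayStartSound(void)
{
	if (started_this_frame >= MAX_STARTED_SOUNDS_PER_FRAME)
	{
		/* drop the sound instead of forcing OpenAL to scale
		 * everything else (and the music) down */
		return 0;
	}

	started_this_frame++;
	return 1;
}
```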
This was so broken... Casting the type of an array to silence a
warning... It worked on x86, of course, but gave a SIGBUS on ARM.
Do it right: cast / copy the contents of the array into another
array of the correct type. Yeah.
This fixes issue #231.
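A hypothetical illustration of the bug class and the fix, not the
original code:

```c
static void CopyExample(const short src[4], int dst[4])
{
	/* broken: int first = *(int *)src;
	 * Reading a short[] through an int * asks for 4 byte alignment
	 * that the array doesn't guarantee. x86 tolerates the unaligned
	 * access, strict alignment platforms like ARM deliver a SIGBUS. */

	/* correct: copy the contents element by element into an array
	 * of the proper type */
	for (int i = 0; i < 4; i++)
	{
		dst[i] = src[i];
	}
}
```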
There're two possible problems with the calculation of the number of
sound buffers for Vorbis if OpenAL is in use:
* We assume that the (more or less) maximum number of buffers is allocated
during map load. This is not correct if in a multiplayer game a lot of
players with custom models and custom sounds connect at a later time.
* 64 buffers (about 3 seconds worth of music) may be too low in some
situations.
Work around this by recalculating the number of buffers if necessary.
We're now reserving about 256 buffers (== 12 seconds worth of music).
This may fix issue #252.
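A rough sketch of the recalculation idea; active_buffers, MAX_BUFFERS
and OGG_RecalcBuffers are placeholders, not the real YQ2 names:

```c
/* Hypothetical sketch, not the actual YQ2 code. active_buffers is
 * assumed to count the OpenAL buffers currently used for normal
 * sounds, MAX_BUFFERS the size of the whole pool. */
#define MAX_BUFFERS   1024
#define MUSIC_BUFFERS 256   /* about 12 seconds worth of Vorbis data */

extern int active_buffers;
static int ogg_numbufs;     /* buffers reserved for the music stream */

static void OGG_RecalcBuffers(void)
{
	int free_buffers = MAX_BUFFERS - active_buffers;

	/* If more sounds than expected were registered after map load
	 * (e.g. late-connecting players with custom sounds), shrink the
	 * music reservation so we never overcommit the pool. */
	ogg_numbufs = (free_buffers < MUSIC_BUFFERS) ? free_buffers : MUSIC_BUFFERS;
}
```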
Turns out clock_get_time() uses mach_timespec_t, which is very similar
to the POSIX struct timespec, so we're back to just one Sys_Microseconds()
function with an #ifdef __APPLE__ for the (relatively small) differences.
Older versions of OS X don't implement clock_gettime() and no(?) version
seems to implement CLOCK_MONOTONIC. Work around this by implementing an
OS X specific variant of Sys_Microseconds() that relies on Mach APIs
provided by all OS X versions...
While at it alter the generic variant so that CLOCK_MONOTONIC is used
only if it's available. CLOCK_REALTIME as a fallback should be good
enough in most cases.
This is believed to fix issue #239.
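A simplified sketch of the approach (error handling omitted, not the
exact YQ2 implementation):

```c
#include <stdint.h>
#include <time.h>

#ifdef __APPLE__
#include <mach/clock.h>
#include <mach/mach.h>
#endif

int64_t Sys_Microseconds(void)
{
#ifdef __APPLE__
	/* Mach APIs available on all OS X versions */
	clock_serv_t cclock;
	mach_timespec_t now;

	host_get_clock_service(mach_host_self(), SYSTEM_CLOCK, &cclock);
	clock_get_time(cclock, &now);
	mach_port_deallocate(mach_task_self(), cclock);
#else
	struct timespec now;

#ifdef CLOCK_MONOTONIC
	clock_gettime(CLOCK_MONOTONIC, &now);
#else
	/* CLOCK_REALTIME as a fallback should be good enough */
	clock_gettime(CLOCK_REALTIME, &now);
#endif
#endif

	return (int64_t)now.tv_sec * 1000000ll + now.tv_nsec / 1000ll;
}
```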
We need to take into account that scaling the characters makes them
bigger, thus they need to be placed depending on the scale and not
at a precalculated position. This should fix issue #247.
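A tiny hypothetical illustration; Draw_CharScaled here stands in for
whatever the refresh API actually provides:

```c
extern void Draw_CharScaled(int x, int y, int num, float scale); /* assumed */

/* With scaled characters each glyph is 8 * scale pixels wide, so the
 * position of the i-th character has to be derived from the scale
 * instead of using a precalculated 8 pixel step. */
static void DrawStringScaled(int x, int y, const char *s, float scale)
{
	for (int i = 0; s[i] != '\0'; i++)
	{
		Draw_CharScaled(x + (int)(i * 8 * scale), y, s[i], scale);
	}
}
```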
Returning 'microseconds / 1000ll' at the first call is wrong, the game
would think that the first frame took way too much time. For some reason
this works on (my) Win10, but breaks on (my) Win7...
The original client used single precision mode on Windows and the
default mode on all other platforms. Most platforms (at least OS X,
FreeBSD, NetBSD up to 6.0, OpenBSD and Solaris) set double precision
as default, Linux sets extended double precision... When playing a
network game there're several possibilities:
* Same precision on both sides: This one is okay, of course.
* single precision <-> double precision: This one is okay, too. I guess
this is because the code allows a small deviation between client and
server to work around imprecisions introduced by the network protocol.
* double precision <-> extended double precision: This one is okay,
likely for the same reasons given above.
* single precision <-> extended double precision: This one gives a lot
of mispredictions at the client side.
All of these are more or less academic these days. Yamagi Quake II used
the platform's default mode for ages. And both gcc and clang default to
SSE2 math (with double precision as default on all platforms) when
compiling for amd64. So the only reasonable case is Linux/i386 on one
side and the original client or another source port on Windows/i386 at
the other side.
Work around this by forcing the x87 to double precision mode.
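A hedged sketch of how this could be done on Linux/i386 with glibc's
<fpu_control.h>; the function name is illustrative:

```c
#if defined(__linux__) && defined(__i386__)
#include <fpu_control.h>

static void Sys_SetupFPU(void)
{
	fpu_control_t old_cw, new_cw;

	_FPU_GETCW(old_cw);

	/* clear the precision control bits, then select double precision */
	new_cw = (old_cw & ~_FPU_EXTENDED & ~_FPU_SINGLE) | _FPU_DOUBLE;

	_FPU_SETCW(new_cw);
}
#endif
```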
Miscframes are coupled to renderframes and are just checking for
renderer changes (very cheap) and advancing CD audio if implemented.
There's no reason not to do that at every frame.
Until now the curtime variable was set at every call to Sys_*seconds().
That's a little bit unfortunate because calls to those functions are
scattered around the code. Instead set it once every frame in
Qcommon_Frame().
The dedicated server runs at cl_maxfps frames per second. Even with very
large values one server frame can never be shorter than 1 millisecond.
And the timing doesn't need to be very precise since the network
latency adds a lot more jitter.
Yes, this duplicates some code. But it's at least 100 times more
readable to have two distinct functions for distinct purposes instead
of about 25 #ifdefs.
This shouldn't have any noticeable impact on timing (unless the machine
is way too slow for Quake II) and saves a lot of CPU cycles. 100% load
vs. 17% load on my desktop.
Having the server in its own timing zone seems to simplify things but
introduces slight timing discrepancies. The most visible effect is that
the game runs a little bit too fast, especially in the first cl_maxfps
frames.
Therefore: Remove timeframes, they're unnecessary. Track the time since
the last (client|server) frame instead and pass it to the client and
server when it's called.
This allows us to implement the global timing without an artificial
brake slowing the game down unnecessarily. This is only partially
working, more changes and fixes are coming.
This is a no-op for now. We need this to get a much higher precision
when calculating the frame times. This changes the fixedtime cvar from
milli- to microseconds.
This is the same as the client does for its realtime. It looks at least
somewhat more correct since it prevents rounding errors. And things are
simplified a little bit since the server timing is now independent of the
global timing.
The old framecounter had two problems:
* It measured only the time of the current render frame, not the total
time spent between the last and the current render frame. Therefore the
calculated value was too high.
* It was based upon milliseconds and rather inaccurate.
This new frame counter solves both problems. The total time spent
between two render frames is measured and the measurement is done in
microseconds.
There're three modes:
* cl_drawfps 1 displays the average frame rate calculated over the last
60 frames.
* cl_drawfps 2 displays a nice string with the minimum framerate, maximum
framerate and average framerate. All three values are calculated over
the last 60 frames.
* cl_drawfps 3 is the same as number 2 but with a second line showing the
raw values.
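A hypothetical sketch of the statistics, assuming the ring buffer has
already been filled once; names are illustrative, not the exact YQ2 code:

```c
#include <stdint.h>

/* keep the last 60 frame times (in microseconds) in a ring buffer and
 * derive minimum, maximum and average frame rate from it */
#define FPS_FRAMES 60

static int64_t frametimes[FPS_FRAMES];
static int frameindex;

static void SCR_AddFrameTime(int64_t usec)
{
	frametimes[frameindex] = usec;
	frameindex = (frameindex + 1) % FPS_FRAMES;
}

static void SCR_FrameStats(double *minfps, double *avgfps, double *maxfps)
{
	int64_t min = frametimes[0], max = frametimes[0], total = 0;

	for (int i = 0; i < FPS_FRAMES; i++)
	{
		if (frametimes[i] < min) { min = frametimes[i]; }
		if (frametimes[i] > max) { max = frametimes[i]; }
		total += frametimes[i];
	}

	*minfps = 1000000.0 / max;                /* longest frame -> lowest fps   */
	*maxfps = 1000000.0 / min;                /* shortest frame -> highest fps */
	*avgfps = 1000000.0 * FPS_FRAMES / total; /* average over the whole window */
}
```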
TODO:
* Discuss if cl_drawfps should be renamed to cl_showfps. All other
status displays are named cl_show*.
While at it remove several unused drawing functions.
This is the same as the well known Sys_Milliseconds() but like the name
suggests with microsecond precision. To be used in the upcoming new
framecounter.
For some fucking reason, if you set an unsupported
SDL_GL_MULTISAMPLESAMPLES value on Windows (at least Win10 with Intel GPU
drivers, where 16 is unsupported), creating the window and OpenGL context
will succeed, but you'll get Microsoft's stupid GDI OpenGL software
implementation that only supports OpenGL 1.1.
Before these fixes, the GL3 renderer would just crash and the GL1 renderer
would fail to load, which caused the game to run in the background:
No window, no input, but sound was playing...
Now this problem should be handled properly: if initialization fails,
the rendering backend will be considered not working. The game will
then try the gl1 backend next, and if that also fails it'll give up
and exit.
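A hedged sketch of such a sanity check (assuming the usual GL headers
are available; not the exact YQ2 code). "GDI Generic" is the renderer
string reported by Microsoft's software implementation:

```c
#include <string.h>

/* called right after the context was created */
static int CheckForGDIFallback(void)
{
	const char *renderer = (const char *)glGetString(GL_RENDERER);

	if (renderer && strstr(renderer, "GDI Generic"))
	{
		/* caller marks this backend as broken and tries gl1 next */
		return 0;
	}

	return 1;
}
```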
Until now the video menu enforced:
* fov set to 90 and horplus set to 1
* fov set to something other than 90 and horplus to 0
If the user had configured another combination through the console, the
menu would reset it, even if only unrelated changes were applied. With
this change horplus is ignored by the menu and only fov is altered. The
rationale behind this is that most users want horplus enabled and all
others can disable it through the console.
This is believed to fix issue #225.
While here reimplement the same hack for baseq2/players, lost somewhere
on the way. This is just another searchpath f*ckup. For some reason
paks have a higher priority than plain directories. We do not want that
for the maps.lst and players/ since id Software decided to put updated
versions of them directly into baseq2/...
This closes issue #217.
SDL_WINDOW_FULLSCREEN changes the display resolution if the requested
resolution differs from the actual resolution. SDL_WINDOW_FULLSCREEN_DESKTOP
doesn't do that, it places a smaller or bigger render area
somewhere inside the fullscreen area. This is somewhat nicer with modern
high resolution flatscreens.
This commit changes vid_fullscreen 1 from SDL_WINDOW_FULLSCREEN to
SDL_WINDOW_FULLSCREEN_DESKTOP. Additionally vid_fullscreen 2 is
implemented, it uses SDL_WINDOW_FULLSCREEN to create the fullscreen
area.
TL;DR: Use vid_fullscreen 1 to keep the current resolution or use
vid_fullscreen 2 to switch the resolution.
Implementation details: The whole fullscreen stuff is a horrible mess.
Like generations of hackers before me I'm not desperate enough to clean
it up. GLimp_InitGraphics() is modified to take the fullscreen mode as
an integer and not as a boolean. That's a change to the renderer API.
In GLimp_InitGraphics() the needed SDL fullscreen mode flag is
determined once at the top and just used further down below. That saves
some SDL1 <-> SDL2 compatibility cruft. IsFullscreen() was modified to
return the actual fullscreen mode and not just whether fullscreen is enabled.
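A sketch of the flag selection, assuming SDL2; the helper name is made
up:

```c
#include <SDL2/SDL.h>

/* fullscreen: 0 = windowed, 1 = desktop fullscreen, 2 = real fullscreen */
static Uint32 FullscreenFlags(int fullscreen)
{
	if (fullscreen == 1)
	{
		return SDL_WINDOW_FULLSCREEN_DESKTOP; /* keep the desktop resolution */
	}
	else if (fullscreen == 2)
	{
		return SDL_WINDOW_FULLSCREEN;         /* switch the display resolution */
	}

	return 0;
}
```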
Several platforms - OpenBSD being a prominent example - don't provide a
way to get the executable path. Don't abort, just return the current
directory ./ as the executable dir. This is just a workaround, of course.
The user needs to supply a script that calls ./quake2 in the correct
directory.
The big problem with the old implementation was that stdout.txt and
stderr.txt on Windows became available when nearly all the low level
initialization was already done, regardless of whether the client was in
normal or in portable mode.
Solve this by scanning the command line for the string '-portable'. If
it's not found, stdout and stderr are redirected as early as possible.
If found the global variable (*sigh*) is_portable is set to true. It's
evaluated later on to set the cvar 'portable', which in turn is used
by the filesystem to decide if the home directory should be added to
the search path.
Maybe we should remove the cvar and stick to the global variable.
While at it change the maximum path length for qconsole.log from
MAX_QPATH to MAX_OSPATH. At least on my Linux laptop MAX_QPATH is
too short.
This commit is still untested on Windows!
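A hedged sketch of the early scan; the names are illustrative, the real
redirection lives in the platform entry point:

```c
#include <stdbool.h>
#include <stdio.h>
#include <string.h>

bool is_portable = false; /* evaluated later to set the 'portable' cvar */

static void Sys_RedirectStdout(int argc, char **argv)
{
	for (int i = 1; i < argc; i++)
	{
		if (strcmp(argv[i], "-portable") == 0)
		{
			is_portable = true;
		}
	}

	if (!is_portable)
	{
		/* redirect as early as possible so even messages from the
		 * low level initialization end up in the files */
		freopen("stdout.txt", "w", stdout);
		freopen("stderr.txt", "w", stderr);
	}
}
```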
A new linked list fs_rawPath with nodes of type fsRawPath_t is added.
The new function FS_BuildRawPath() fills it at filesystem initialization
with the raw search path directories. Later FS_BuildGenericSearchPath()
and FS_BuildGameSpecificSearchPath() use it to derive the actual search
directories.
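A sketch of what the node type might look like; only the fsRawPath_t
name is from the commit, the fields are assumptions (MAX_OSPATH and
qboolean being the usual Quake II definitions):

```c
typedef struct fsRawPath_s {
	char path[MAX_OSPATH];     /* raw directory, e.g. basedir or the binary dir */
	qboolean create;           /* create the directory if it doesn't exist?     */
	struct fsRawPath_s *next;
} fsRawPath_t;

extern fsRawPath_t *fs_rawPath; /* head of the list, filled by FS_BuildRawPath() */
```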
Remove all functions that are no longer used:
* FS_AddGameDirectory()
* FS_SetGameDir()
* FS_AddHomeAsGameDirectory()
* FS_AddBinaryDirAsGameDirectory()
While at it try to remove as many global variables from filesystem.c as
possible. Also fix a small, longstanding bug: The download code should
treat .zip and .pk3 files as pak files and not as normal directories.
Refactor FS_SetGamedir() into FS_BuildGameSpecificSearchPath(). The new
function removes the specialized part of the search path if necessary
and creates a new specialized part based upon the given directory. It
uses the FS_AddDirToSearchPath() function added in the last commit.
This moves the code used to add a directory and its paks to the search
path into one well defined function FS_AddDirToSearchPath(). Also create
a new function FS_BuildGenericSearchPath() that builds the generic part
of the global search path. This obsoletes several other, specialized
functions. They'll be removed in a later commit.
This prevents Windows from scaling our (fullscreen) window to crap if
the whole desktop is scaled and we're rendering more than 1080p. This is
believed to fix #208.
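A hedged sketch of one way to do this on Windows; the real code would
probably resolve the function at runtime to stay compatible with older
Windows versions:

```c
#ifdef _WIN32
#include <windows.h>

/* mark the process as DPI aware so Windows doesn't scale our window;
 * SetProcessDPIAware() exists since Vista */
static void Sys_SetHighDPIMode(void)
{
	SetProcessDPIAware();
}
#endif
```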
If that cvar is set to 1, particles aren't rendered as nice circles, but
as squares, like in the software renderer or in Quake 1.
Also documented it in cvarlist.md and fixed some typos there.
The model shadows are rendered after all entities are rendered.
This fixes them making entity brushes below them translucent (#194)
The model rendering code used lots of global variables, many of them
totally superfluous (esp. currententity, currentmodel).
I refactored the code to use less global variables (this was at least
partly needed to render the shadows later).
So this looks like lots of changes, but many of them are just using
"entity" instead of "currententity" or "model" instead of "currentmodel".
Like GL1 gl_shadows + gl_stencilshadows: no shadow volumes, but looks
ok apart from standing over edges
The gl_stencilshadows cvar isn't used in GL3, it always uses the stencil
buffer if available (and if gl_shadows != 0)
This still needs performance optimizations: Like the GL1 implementation
it takes lots of draw calls per model, while it could be done with one
draw call per model, like when rendering the actual model.