We're going to remove support for SDL 1.2 shortly after the next
release. Give the last remaining users a very clear warning about
it: error out and force them to edit the code.
This happens when you just copy-paste and adapt r_lefthand.
Also made some minor changes to R_AliasDrawModel in the soft renderer
to make sure alias[xy]scale is reset properly in the early-out cases.
At high 'fov' values the weapon looked quite distorted.
Now it's rendered with an independent FOV, which looks better.
Note that the 'fov' cvar sets fov_x, while this is based on fov_y
(which is calculated from fov_x), so it's indeed different values:
r_gunfov seems to correspond to fov 90.
We use r_gunfov 80 as default, because it looks better.
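For reference, this is the usual fov_x to fov_y conversion, which a gun
FOV implementation could reuse (a sketch, not the exact code; gunfov_y
and viddef are assumptions):
=====
/* Standard Quake II fov_x -> fov_y conversion. */
static float
CalcFov(float fov_x, float width, float height)
{
	float x = width / (float)tan(fov_x / 360.0 * M_PI);
	return (float)atan(height / x) * 360.0 / M_PI;
}

/* while setting up the view, give the gun its own vertical FOV: */
gunfov_y = CalcFov(r_gunfov->value, (float)viddef.width, (float)viddef.height);
=====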
The old code was working only when the client was connected to a local
server. The 'newgame' executed by the menu expands to a 'map', loading
a map ends in SV_InitGame() which calls CL_Drop() on the local client.
That calls CL_Disconnect() and everything is okay.
When the client is already connected to a remote server and no local
server is running the 'map' command spawns a new local server. This
new server thinks "Hey, I'm a new local server and no one is connected
to me. Let's pull the client in!". So it pulls the already connected
client onto the new server without disconnecting, smashing its state.
And everything goes down in flames.
The correct way would be to execute a 'disconnect' right before the
'newgame'. But the 'disconnect' cmd calls CL_Disconnect_f which throws
an ERR_DROP. ERR_DROP is implemented through a longjmp(), and jumping
around puts the process's internal state in ashes... So bite the bullet
and add another hack: Call CL_Disconnect() before executing 'newgame'.
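In code the hack boils down to something like this sketch (the menu
callback name is an assumption; CL_Disconnect() and Cbuf_AddText() are
the real functions):
=====
/* Disconnect cleanly before the menu kicks off a new game, instead of
 * going through the 'disconnect' command and its ERR_DROP longjmp(). */
static void
StartGame(void)
{
	CL_Disconnect();           /* drop any existing connection first */
	Cbuf_AddText("newgame\n"); /* now 'map' may safely spawn a local server */
}
=====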
The 'game' command was more or less functional after the last commit.
We just need to reset the initialGame (renamed to userGivenGame) so we
don't revert back to the old game at server disconnect.
When connecting to a multiplayer game that runs a different mod
("game" cvar) than you are, it didn't load the corresponging configs
from the mod, but saved your changes to the config to the mod's config.
Which is doubly useless.
Now when the "game" cvar is changed, the configs are reloaded (from
the right directories for the mod), and when disconnecting the configs
are written, so the changes you made for a mod while playing MP are saved
before 'game' is reset to the game you started with.
Until now we did an easy calculation to determine the frame timing:
1000000 microseconds (== 1 second) / targetframerate == delay between
frames. This works if the CPU and GPU are fast enough, since the time
needed to process the frame is negligible. But if one of them is too
slow or the GPU driver takes too long (see issue #277 for an example)
we render too few frames.
Work around this by calculating the average time used to process the
last 60 render or packet frames and take that into account when
determining the delay between the frames. With this change even my
rotten AMD Radeon and its broken Windows GL driver is able to hold
the display's framerate (enforced by vsync) just fine.
While here add a 5% security margin to our target packet frame rate if
the vsync is enabled. Just to be sure that we never process more render
than packet frames.
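Roughly like this sketch (framestart, frameend and targetfps are assumed
names, not the actual code):
=====
#define AVGSAMPLES 60

static long long frametimes[AVGSAMPLES]; /* cost of the last 60 frames */
static int frameidx;

static long long
AvgFrametime(void)
{
	long long sum = 0;

	for (int i = 0; i < AVGSAMPLES; i++)
	{
		sum += frametimes[i];
	}

	return sum / AVGSAMPLES;
}

/* once per frame: */
frametimes[frameidx] = frameend - framestart;
frameidx = (frameidx + 1) % AVGSAMPLES;
delay = (1000000ll / targetfps) - AvgFrametime();
=====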
Modern LCD displays often don't have integral refresh rates like 60hz but
fractional ones like 59.95hz. SDL communicates the refresh rate as
integer. On X11 the rate is rounded up or down with round(), but on
Windows it's (at least on my system with an AMD Radeon) truncated...
So on a 59.95hz display it's just 59hz, Quake II renders 0.95 frames
too few and the user sees microstutters.
And return the actual / requested display frame rate increased by one
to work around inaccuracies in Quake II's internal timing. It shouldn't
be a problem if we're running a little bit too fast.
This is believed to fix at least a part of issue #277.
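A sketch of what the rate query could look like with SDL2 (the function
name and the fallback value are assumptions):
=====
int
GLimp_GetRefreshRate(void)
{
	SDL_DisplayMode mode;

	if ((SDL_GetCurrentDisplayMode(0, &mode) == 0) && (mode.refresh_rate > 0))
	{
		/* +1 so a truncated 59 on a 59.95hz display can't starve us */
		return mode.refresh_rate + 1;
	}

	return 61; /* assume a plain 60hz display if SDL can't tell us */
}
=====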
The problem was that the cvars were only initialized (with CVar_Get())
if you opened the address book menu.
So if you start (and possibly run) and quit the game /without/ opening
that menu (or at least the "join network server" menu), the game will
not save those cvars to the config when it next writes it.
To prevent this, *always* initialize the cvars in M_Init().
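A sketch of the fix, using the vanilla address book cvar names (adr0,
adr1, ...):
=====
void
M_Init(void)
{
	int i;
	char name[8];

	/* Register the address book cvars unconditionally, so they're
	 * always known when the config is written. */
	for (i = 0; i < NUM_ADDRESSBOOK_ENTRIES; i++)
	{
		Com_sprintf(name, sizeof(name), "adr%d", i);
		Cvar_Get(name, "", CVAR_ARCHIVE);
	}

	/* ... the rest of the menu initialization ... */
}
=====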
There were two ways to implement this. One was to go with the stuff
already included in minizip and one to implement our own wrapper around
fopen(). This is the second option since @DanielGibson convinced me
that it would be safer.
We can't rely on the game.dll being unicode conformant. Work around
that by changing the current working directory before calling into
the game.dll, pass a non-unicode string to it and change back after
we return.
To be able to pass UTF-8 encoded paths through cvars both the cvar
subsystem and the command parser would need a fair amount of UTF-8
understanding. And I'm not the poor soul that's going to implement
that. Therefore pass the datadir through a global variable.
This is done in shared.c so it's available for both the client /
server / renderer and the game. A workaround for older game DLLs will
be added at a later time.
On Unix platforms unicode is implemented through UTF-8 which is
transparent for applications. But on Windows a UTF-16 dialect is
used which needs alteration at application side. This wrapper is
another step to unicode support on Windows, now we can replace
fopen() by a function that converts our internal UTF-8 paths to
Windows UTF-16 dialect.
This is a noop for Unix platforms. The Windows build is broken,
the compiler errors out in shared.h. This will be fixed in a
later commit.
Caveats:
* fopen() calls in 3rd party code (std_* and unzip) are not replaced.
This may become a problem. We need to check that.
* In the Unix specific code fopen() isn't replaced since it's not
necessary.
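The Windows half of such a wrapper could look roughly like this (a
sketch, not the actual implementation; Q_fopen is an assumed name):
=====
#ifdef _WIN32

#include <windows.h>
#include <stdio.h>

FILE *
Q_fopen(const char *file, const char *mode)
{
	WCHAR wfile[MAX_OSPATH];
	WCHAR wmode[16];

	/* convert the internal UTF-8 path and mode to Windows UTF-16 */
	if (MultiByteToWideChar(CP_UTF8, 0, file, -1, wfile, MAX_OSPATH) == 0)
	{
		return NULL;
	}

	if (MultiByteToWideChar(CP_UTF8, 0, mode, -1, wmode, 16) == 0)
	{
		return NULL;
	}

	return _wfopen(wfile, wmode);
}

#else /* on Unix UTF-8 is transparent, plain fopen() is fine */
#define Q_fopen fopen
#endif
=====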
With this commit YQ2 is able to start and run on ReFS volumes. :) At
least as long as neither the binary path, the game data path nor the
path to the users home directory contain anything but ASCII characters.
Please note: This may break some corner cases with home directories
containing unicode characters. They worked until now by pure luck.
A better solution providing full unicode support will be committed
in the next few days.
This brings at least two big advantages:
* No more 8.3 filename fuckups. Until now base0.pak and base0.pak_bak
were the same file for Quake II because only the first 3 characters of
the file extension were taken into account.
* Search paths can contain any Unicode character.
There's no need to exclude directories from search by flags. In fact
the Unix backend has worked nicely for years without it... Sadly we
can't remove the now superfluous 'canhave' and 'musthave' attributes
from Sys_FindFirst() and Sys_FindNext() since they're defined in
shared.h and may be used from custom game DLLs.
* Remove a bunch of unnecessary functions.
* Reorder functions into logical groups. The ordering is now the same
on Unix and Windows.
While at it add several TODOs to the code. There's no need for special
library loading functions for the game, the Windows backend still uses
a lot of old and fishy DOS functions, etc. All this will be done at a
later time.
There's no need to duplicate machine independent parts of the client
initialization and the main loop for every platform.
While at it remove the nearly empty unix.h header and move the Windows
main() into its own file. Now both platforms have the same basic layout.
While building the wrapper as a console application is completely fine,
there are some advantages to creating a "real" Windows GUI application:
* Console applications always spawn an annoying console window.
* Windows GUI applications seem to have a much lower chance to trigger
my new best friend, the Windows Defender. As a console application
quake.exe triggered every time I started it, as Windows GUI
application not only once.
Use WinMain() instead of wWinMain() because MinGW doesn't know about
the latter, and it doesn't matter anyway.
libSDLmain.a has to be linked and must run anyway. So there's no need
for us to reinvent the wheel; just rely on SDL's process setup, argument
parsing, message handling and so on. As a nice side effect this may fix
some strange bugs related to message handling and argument parsing...
My modifications (jpeg writing and supplying zlib compressor for better
PNG compression) have been merged upstream, so from now on updates
should be easy and painless.
(Sean renamed my stbi_png_level to stbi_write_png_compression_level)
Until now we had 3 modes:
0 -> never grab the mouse.
1 -> always grab the mouse.
2 -> ungrab the mouse if the game is windowed and the console or the
menu is opened or a cinematic is playing.
The 3rd mode is the same as the 2nd one, but without the "game is
windowed" constraint. Please note that releasing the mouse grab in
fullscreen may have side effects like the game losing focus and being
unable to regain it. Especially under X11.
This was requested by @prg318 in issue #271.
The loop 'for ( i = 0; i < 3; i++ )' sets values to vtx[0..2]. So the next
index must be 3 (instead of 4) and the loop 'for ( i = 16; i >= 0; i-- )'
will set vtx[3..(18*3-1)].
=====
src/client/refresh/gl/r_light.c: In function ‘R_RenderDlight’:
src/client/refresh/gl/r_light.c:76:21: warning: iteration 16 invokes undefined behavior [-Waggressive-loop-optimizations]
vtx[index_vtx++] = light->origin [ j ] + vright [ j ] * cos( a ) * rad
~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ vup [ j ] * sin( a ) * rad;
~~~~~~~~~~~~~~~~~~~~~~~~~~~~
src/client/refresh/gl/r_light.c:65:2: note: within this loop
for ( i = 16; i >= 0; i-- )
^~~
=====
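Reconstructed from the warning above, the fix is to start the rim
vertices at index 3:
=====
GLfloat vtx[18 * 3];
unsigned int index_vtx = 3; /* was 4: the center vertex only fills
                             * vtx[0..2], starting at 4 writes past the
                             * end of the array on the final iteration */

for (i = 16; i >= 0; i--)
{
	float a = i / 16.0f * M_PI * 2;

	for (j = 0; j < 3; j++)
	{
		vtx[index_vtx++] = light->origin[j] + vright[j] * cos(a) * rad
		                   + vup[j] * sin(a) * rad;
	}
}
=====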
Apparently something (possibly nvidia's driver) on some windows
installations has some stupid application profile for quake2.exe that
breaks mouse input if the console has been opened...
Our workaround is to rename quake2.exe to yquake2.exe and provide a wrapper
quake2.exe that just calls the real one for backwards compatibility.
This is the source of that wrapper.
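A minimal sketch of what such a wrapper amounts to (the real source
lives in the repository and may differ in details):
=====
#include <windows.h>

int WINAPI
WinMain(HINSTANCE hInstance, HINSTANCE hPrevInstance,
        LPSTR lpCmdLine, int nShowCmd)
{
	STARTUPINFO si;
	PROCESS_INFORMATION pi;

	ZeroMemory(&si, sizeof(si));
	si.cb = sizeof(si);
	ZeroMemory(&pi, sizeof(pi));

	/* spawn the real binary with our original command line */
	if (!CreateProcess("yquake2.exe", GetCommandLine(), NULL, NULL,
	                   FALSE, 0, NULL, NULL, &si, &pi))
	{
		return 1;
	}

	WaitForSingleObject(pi.hProcess, INFINITE);
	CloseHandle(pi.hProcess);
	CloseHandle(pi.hThread);

	return 0;
}
=====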
This is mostly the same approach as in GL1. I'm not quite sure if the
software rasterizer can work with all aspect ratios and the like, but I
wasn't able to crash it by trying several random resolutions.
With this, renamed cvars can be rewritten when config.cfg is first
loaded. Please note that once this was done older YQ2 versions can't
parse that config.cfg anymore.
gl_maxfps > 1000 breaks things, and cl_maxfps starts to behave weird
at >90, and while up to 125 or so you get the bugfeature of higher
jumping, beyond that things just get even buggier, at some point causing
bugs like #261
If too many of these sounds are started in one frame (for example if the
player shoots with the super shotgun into the power screen of a Brain)
things get too loud and OpenAL is forced to scale the volume of several
other sounds and the background music down. That leads to a noticeable
and annoying drop in the overall volume.
Work around that by limiting the number of sounds started. 16 was
chosen by empirical testing.
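As a sketch (S_AllowSound() is an assumed helper name, not the actual
code):
=====
#define MAX_SOUNDS_PER_FRAME 16 /* determined empirically */

static int sounds_started; /* reset to 0 once per frame */

static qboolean
S_AllowSound(void)
{
	if (sounds_started >= MAX_SOUNDS_PER_FRAME)
	{
		return false; /* loud enough already, drop this sound */
	}

	sounds_started++;
	return true;
}
=====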
This was so broken... Casting the type of an array to silence a
warning... It worked on x86, of course. But gave a SIGBUS on ARM.
Do it right, cast / copy the content of the array into another
array of the correct type. Yeah.
This fixes issue #231.
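The pattern of the fix, in general terms (byte_buffer stands in for
whatever the real code reads; this is a sketch, not the actual diff):
=====
/* Wrong: reinterpreting a buffer through a pointer cast relies on x86
 * tolerating unaligned, type-punned accesses; on ARM it can SIGBUS:
 *
 *     float *f = (float *)byte_buffer;
 *
 * Right: copy into a properly typed (and therefore properly aligned)
 * array and work on that. */
float vals[3];

memcpy(vals, byte_buffer, sizeof(vals));
=====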
There're two possible problems with the calculation of the number of
sound buffers for Vorbis if OpenAL is in use:
* We assume that the (more or less) maximum number of buffers is allocated
during map load. This is not correct if, in a multiplayer game, a lot of
clients with custom models and custom sounds connect at a later time.
* 64 buffers (about 3 seconds worth of music) may be too low in some
situations.
Work around this by recalculating the number of buffers if necessary.
We're now reserving about 256 buffers (== 12 seconds).
This may fix issue #252.
Turns out clock_get_time() uses mach_timespec_t, which is very similar
to the POSIX timespec, so we're back to just one Sys_Microseconds()
function with an #ifdef __APPLE__ for the (relatively small) differences.
Older versions of OS X don't implement clock_gettime() and no(?) version
seems to implement CLOCK_MONOTONIC. Work around this by implementing an
OS X specific variant of Sys_Microseconds() that relies on Mach APIs
provided by all OS X versions...
While at it alter the generic variant so that CLOCK_MONOTONIC is used
only if it's available. CLOCK_REALTIME as a fallback should be good
enough in most cases.
This is believed to fix issue #239.
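Put together, the unified function could look like this sketch (details
of the real implementation may differ):
=====
#include <time.h>

#ifdef __APPLE__
#include <mach/clock.h>
#include <mach/mach.h>
#endif

long long
Sys_Microseconds(void)
{
#ifdef __APPLE__
	/* mach_timespec_t is close enough to the POSIX timespec */
	clock_serv_t cclock;
	mach_timespec_t now;

	host_get_clock_service(mach_host_self(), SYSTEM_CLOCK, &cclock);
	clock_get_time(cclock, &now);
	mach_port_deallocate(mach_task_self(), cclock);
#else
	struct timespec now;

#ifdef CLOCK_MONOTONIC
	clock_gettime(CLOCK_MONOTONIC, &now);
#else
	clock_gettime(CLOCK_REALTIME, &now); /* good enough as a fallback */
#endif
#endif

	return (long long)now.tv_sec * 1000000ll + (long long)now.tv_nsec / 1000ll;
}
=====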
We need to take into account that scaling the characters makes them
bigger, thus they need to be placed depending on the scale and not
at a precalculated position. This should fix issue #247.
Returning 'microseconds / 1000ll' at the first call is wrong, the game
would think that the first frame took way too much time. For some reason
this works on (my) Win10, but breaks on (my) Win7...
The original client used single precision mode on Windows and the
default mode on all other platforms. Most platform (at least OS X,
FreeBSD, NetBSD up to 6.0, OpenBSD and Solaris) set double precision
as default, Linux sets extended double precision... When playing a
network game there're several possibilities:
* Same precision on both sides: This one is okay, of course.
* single precision <-> double precision: This one is okay, too. I guess
this is because the code allows a small deviation between client and
server to work around imprecisions introduced by the network protocol.
* double precision <-> extended double precision: This one is okay,
likely for the same reasons given above.
* single precision <-> extended double precision: This one gives a lot
of mispredictions on the client side.
All of these are more or less academic these days. Yamagi Quake II used
the platform's default mode for ages. And both gcc and clang default to
SSE2 math (with double precision as default on all platforms) when
compiling for amd64. So the only reasonable case is Linux/i386 on one
side and the original client or another source port on Windows/i386 at
the other side.
Work around this by forcing the x87 to double precision mode.
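On Linux/i386 this could be done roughly like this (a sketch using
glibc's fpu_control.h; Windows builds would use
_controlfp(_PC_53, _MCW_PC) instead):
=====
#if defined(__linux__) && defined(__i386__)

#include <fpu_control.h>

static void
Sys_SetupFPU(void)
{
	fpu_control_t cw;

	_FPU_GETCW(cw);
	cw &= ~_FPU_EXTENDED; /* clear the precision control bits ... */
	cw |= _FPU_DOUBLE;    /* ... and select double precision */
	_FPU_SETCW(cw);
}

#endif
=====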
Miscframes are coupled to renderframes and are just checking for
renderer changes (very cheap) and advancing CD audio if implemented.
There's no reason not to do that at every frame.
Until now the curtime variable was set at every call to Sys_*seconds().
That's a little bit unfortunate because calls to those functions are
scattered around the code. Instead set it once every frame in
Qcommon_Frame().
The dedicated server runs at cl_maxfps frames per second. Even with very
large values one server frame can never be shorter than 1 millisecond.
And the timing doesn't need to be very precise since the network
latency adds a lot more jitter.
Yes, this duplicates some code. But it's at least 100 times more
readable to have two distinct functions for distinct purposes instead
of about 25 #ifdefs.
This shouldn't have any noticeable impact on timing (unless the machine
is way too slow for Quake II) and saves a lot of CPU cycles. 100% load
vs. 17% load on my desktop.
Having the server in its own timing zone seems to simplify things but
introduces slight timing discrepancies. The most visible effect is that
the game runs a little bit too fast, especially in the first cl_maxfps
frames.
Therefore: Remove timeframes, they're unnecessary. Track the time since
the last (client|server) frame instead and pass it to the client and
server when it's called.
This allows us to implement the global timing without an artificial
brake unnecessarily slowing the game down. This is only partially
working, more changes and fixes are coming.
This is a no-op for now. We need this to get a much higher precision
when calculating the frame times. This changes the fixedtime cvar from
milli- to microseconds.
This is the same as the client does for its realtime. It looks at least
somewhat more correct since it prevents rounding errors. And things are
simplified a little bit since the server timing is now independent of the
global timing.
The old framecounter had two problems:
* It measured only the time of the current render frame, not the total
time spent between the last and the current render frame. Therefore the
calculated value was too high.
* It was based upon milliseconds and rather inaccurate.
This new frame counter solves both problems. The total time spent
between two render frames is measured and the measurement is done in
microseconds.
There're three modes:
* cl_drawfps 1 displays the average frame rate calculated over the last
60 frames.
* cl_drawfps 2 displays a nice string with minimal framerate, maximum
framerate and average framerate. All three values are calculated over
the last 60 frames.
* cl_drawfps 3 is the same as number 2 but with a second line showing the
raw values.
TODO:
* Discuss if cl_drawfps should be renamed to cl_showfps. All other
status displays are named cl_show*.
While at it remove several unused drawing functions.
This is the same as the well known Sys_Milliseconds() but like the name
suggests with microsecond precision. To be used in the upcoming new
framecounter.
For some fucking reason, if you set an unsupported
SDL_GL_MULTISAMPLESAMPLES value on Windows (at least Win10 with Intel GPU
drivers, where 16 is unsupported), creating the window and OpenGL context
will succeed, but you'll get Microsoft's stupid GDI OpenGL software
implementation that only supports OpenGL 1.1.
Before these fixes, the GL3 renderer would just crash and the GL1 renderer
would fail to load, which caused the game to run in the background:
no window, no input, but sound was playing...
Now this problem should be handled properly and if initialization fails,
the rendering backend will be considered not working, and it will
try the gl1 backend next, and if that also fails it'll give up and exit
the game.
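A sketch of how such a check could look (the exact vendor string
matching and retry logic are assumptions):
=====
/* After context creation: make sure we didn't silently end up on
 * Microsoft's GDI software implementation. */
const char *vendor = (const char *)glGetString(GL_VENDOR);

if (vendor && strstr(vendor, "Microsoft") != NULL)
{
	Com_Printf("Got the GDI software fallback, retrying with less MSAA.\n");
	/* destroy the window and context, lower SDL_GL_MULTISAMPLESAMPLES,
	 * try again; if nothing works, report failure so the next renderer
	 * (or a clean exit) can be tried. */
}
=====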
Until now the video menu enforced:
* fov set to 90 and horplus set to 1
* fov set to something other than 90 and horplus to 0
If the user had configured another combination through the console, the
menu would reset it, even if only unrelated changes were applied. With
this change horplus is ignored by the menu and only fov is altered. The
rationale behind this is that most users want horplus enabled and all
others can disable it through the console.
This is believed to fix issue #225.
While here reimplement the same hack for baseq2/players, lost somewhere
on the way. This is just another searchpath f*ckup. For some reason
paks have a higher priority than plain directories. We do not want that
for the maps.lst and players/ since id Software decided to put updated
versions of them directly into baseq2/...
This closes issue #217.
SDL_WINDOW_FULLSCREEN changes the display resolution if the requested
resolution is different from the actual resolution. SDL_WINDOW_FULLSCREEN_
DESKTOP doesn't do that, it places a smaller or bigger render area
somewhere inside the fullscreen area. This is somewhat nicer with modern
high resolution flatscreens.
This commit changes vid_fullscreen 1 from SDL_WINDOW_FULLSCREEN to
SDL_WINDOW_FULLSCREEN_DESKTOP. Additionally, vid_fullscreen 2 is
implemented, it uses SDL_WINDOW_FULLSCREEN to create the fullscreen
area.
TL;DR: Use vid_fullscreen 1 to keep the current resolution or use
vid_fullscreen 2 to switch the resolution.
Implementation details: The whole fullscreen stuff is a horrible mess.
Like generations of hackers before me I'm not desperate enough to clean
it up. GLimp_InitGraphics() is modified to take the fullscreen mode as
an integer and not as a boolean. That's a change to the renderer API.
In GLimp_InitGraphics() the needed SDL fullscreen mode flag is
determined once at the top and just used further down below. That saves
some SDL1 <-> SDL2 compatibility cruft. IsFullscreen() was modified to
return the actual fullscreen mode and not just if fullscreen is enabled.
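The flag selection then boils down to something like this sketch,
following the modes described above:
=====
int flags = 0;

if (fullscreen == 1)
{
	flags |= SDL_WINDOW_FULLSCREEN_DESKTOP; /* keep the desktop resolution */
}
else if (fullscreen == 2)
{
	flags |= SDL_WINDOW_FULLSCREEN; /* switch the display resolution */
}
=====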
Several platforms - OpenBSD being a prominent example - don't provide a
way to get the executable path. Don't abort, just return the current
dir ./ as the executable dir. This is just a workaround, of course. The user
needs to supply a script that calls ./quake2 in the correct directory.
The big problem with the old implementation was that stdout.txt and
stderr.txt on Windows became available when nearly all the low level
initialization was already done. Regardless if the client was in
normal or in portable mode.
Solve this by scanning the command line for the string '-portable'. If
it's not found, stdout and stderr are redirected as early as possible.
If found the global variable (*sigh*) is_portable is set to true. It's
evaluated later on to set the cvar 'portable', which in turn is used
by the filesystem to decide if the home directory should be added to
the search path.
Maybe we should remove the cvar and stick to the global variable.
While at it change the maximum path length for qconsole.log from
MAX_QPATH to MAX_OSPATH. At least on my Linux laptop MAX_QPATH is
too short.
This commit is still untested on Windows!
A new linked list fs_rawPath with nodes of type fsRawPath_t is added.
The new function FS_BuildRawPath() fills it at filesystem initialization
with the raw search path directories. Later FS_BuildGenericSearchPath()
and FS_BuildGameSpecificSearchPath() use it to derive the actual search
directories.
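As a sketch (field names are assumptions, the actual struct may differ):
=====
typedef struct fsRawPath_s {
	char path[MAX_OSPATH];     /* one raw search path directory */
	qboolean create;           /* create the directory if it's missing */
	struct fsRawPath_s *next;
} fsRawPath_t;

fsRawPath_t *fs_rawPath; /* filled once by FS_BuildRawPath() at startup */
=====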
Remove all functions that are no longer used:
* FS_AddGameDirectory()
* FS_SetGameDir()
* FS_AddHomeAsGameDirectory()
* FS_AddBinaryDirAsGameDirectory()
While at it try to remove as many global variables from filesystem.c as
possible. Also fix a small, longstanding bug: The download code should
treat .zip and .pk3 files as pak files and not as normal directories.
Refactor FS_SetGamedir() into FS_BuildGameSpecificSearchPath(). The new
function removes the specialized part of the search path if necessary
and creates a new specialized part based upon the given directory. It
uses the FS_AddDirToSearchpath() function added in the last commit.
This moves the code used to add a directory and its paks to the search
path into one well defined function FS_AddDirToSearchPath(). Also create
a new function FS_BuildGenericSearchPath() that builds the generic part
of the global search path. This obsoletes several other, specialized
functions. They'll be removed in a later commit.
This prevents Windows from scaling our (fullscreen) window to crap if
the whole desktop is scaled and we're rendering more than 1080p. This is
believed to fix #208.
If that cvar is set to 1, particles aren't rendered as nice circles, but
as squares, like in the software renderer or in Quake 1.
Also documented it in cvarlist.md and fixed some typos there
The model shadows are rendered after all entities are rendered.
This fixes them making entity brushes below them translucent (#194)
The model rendering code used lots of global variables, many of them
totally superfluous (esp. currententity, currentmodel).
I refactored the code to use less global variables (this was at least
partly needed to render the shadows later).
So this looks like lots of changes, but many of them are just using
"entity" instead of "currententity" or "model" instead of "currentmodel"
Like GL1 gl_shadows + gl_stencilshadows: no shadow volumes, but looks
ok apart from standing over edges
The gl_stencilshadows cvar isn't used in GL3, it always uses the stencil
buffer if available (and if gl_shadows != 0)
This still needs performance optimizations: Like the GL1 impl it takes
lots of draw calls per model, it could be done with one per model like
when rendering the actual model.
There have been complaints that those things look too bright, so let
people configure their intensity independently of the general intensity
used for levels, monsters etc.
Fixes #189.
When applied to y, SURF_FLOWING textures are scrolled in the wrong
direction. I guess that in GL1 the offset is also applied to x.
This fixes issue #186.
Sometimes cinematics are skipped after the first frame even if the
player didn't press any key. I'm unable to reliably reproduce that,
so my educated guess is that one or more events are still waiting in
SDL's event queue.
For example, during intermission IN_Update() is not called for 5
seconds, key presses by impatient players are just added to the queue
and not processed. The first event is used to leave the
intermission, the second event skips the cinematic...
Fix this by implementing a new function IN_FlushQueue() to flush SDL's
event queue and calling it when starting cinematic playback. Yes, this
is just another layer violation. :(
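With SDL2 the new function is essentially a one-liner (a sketch):
=====
void
IN_FlushQueue(void)
{
	/* drop everything that queued up while input wasn't processed,
	 * so a stale key press can't skip the cinematic */
	SDL_FlushEvents(SDL_FIRSTEVENT, SDL_LASTEVENT);
}
=====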
When the client is paused (either explicitly or by entering the menu or
console) the cinematic is paused, too. Therefore no more sound samples
are generated and added to the playback queue, and the existing samples
are played over and over again. Until now these samples weren't audible,
because OpenAL marked them as processed and AL_StreamUpdate() removed
them from OpenAL's playback queues. This changed in the previous commit,
now they stay in OpenAL's queue and are audible.
Fix this by calling AL_UnqueueRawSamples() when the menu or console is
entered during cinematic playback.