* Remove a bunch of unnecessary functions.
* Reorder functions into logical groups. The ordering is now the same
on Unix and Windows.
While at it add several TODOs to the code. There's no need for special
library loading functions for the game, the Windows backend still uses
a lot of old and fishy DOS functions, etc. All this will be done at a
later time.
There's no need to duplicate machine independent parts of the client
initialization and the main loop for every platform.
While at it remove the nearly empty unix.h header and move the Windows
main() into its own file. Now both platforms have the same basic layout.
While building the wrapper as a console application is completely fine,
there're some advantages to creating a "real" Windows GUI application:
* Console applications always spawn an annoying console window.
* Windows GUI applications seem to have a much lower chance to trigger
my new best friend, the Windows Defender. As a console application
quake2.exe triggered it every time I started it, as a Windows GUI
application not even once.
Use WinMain() instead of wWinMain() because MinGW doesn't know about
the latter and it doesn't matter anyway.
libSDLmain.a has to be linked and must run anyway. So there's no need
for us to reinvent the wheel, just rely on SDL's process setup, argument
parsing, message handling and so on. As a nice side effect this may fix
some strange bugs related to message handling and argument parsing...
My modifications (jpeg writing and supplying zlib compressor for better
PNG compression) have been merged upstream, so from now on updates
should be easy and painless.
(Sean renamed my stbi_png_level to stbi_write_png_compression_level)
Until now we had 3 modes:
0 -> never grab the mouse.
1 -> always grab the mouse.
2 -> ungrab the mouse if the game is windowed and the console or the
menu is opened or a cinematic is playing.
The new mode 3 is the same as mode 2, but without the "game is
windowed" constraint. Please note that releasing the mouse grab in
fullscreen may have side effects like the game losing focus and being
unable to regain it. Especially under X11.
This was requested by @prg318 in issue #271.
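A minimal sketch of the resulting decision logic, with hypothetical
names (the real cvar handling differs):

/* Hypothetical sketch of the grab decision for the four modes
 * described above. ui_active means: the console is open, the menu
 * is open or a cinematic is playing. */
static int
WantMouseGrab(int grab_mode, int windowed, int ui_active)
{
	switch (grab_mode)
	{
		case 0:
			return 0; /* never grab */
		case 1:
			return 1; /* always grab */
		case 2:
			/* ungrab only if windowed and some UI is up */
			return !(windowed && ui_active);
		default:
			/* mode 3: ungrab whenever some UI is up */
			return !ui_active;
	}
}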
Loop 'for ( i = 0; i < 3; i++ )' sets values to vtx[0..2]. So the next
index must be 3 (instead of 4) and loop 'for ( i = 16; i >= 0; i-- )'
will set vtx[3..(18*3-1)].
=====
src/client/refresh/gl/r_light.c: In function ‘R_RenderDlight’:
src/client/refresh/gl/r_light.c:76:21: warning: iteration 16 invokes undefined behavior [-Waggressive-loop-optimizations]
vtx[index_vtx++] = light->origin [ j ] + vright [ j ] * cos( a ) * rad
~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ vup [ j ] * sin( a ) * rad;
~~~~~~~~~~~~~~~~~~~~~~~~~~~~
src/client/refresh/gl/r_light.c:65:2: note: within this loop
for ( i = 16; i >= 0; i-- )
^~~
=====
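In code, the fix looks roughly like this (a sketch, assuming the usual
vtx[18 * 3] layout of one center vertex plus 17 circle vertices;
origin, vpn, vright and vup stand in for the variables the real
function uses):

#include <math.h>

static void
BuildDlightVerts(const float origin[3], const float vpn[3],
		const float vright[3], const float vup[3],
		float rad, float vtx[18 * 3])
{
	int i, j, index_vtx = 0;

	/* Center vertex: fills vtx[0..2]. */
	for (i = 0; i < 3; i++)
	{
		vtx[index_vtx++] = origin[i] - vpn[i] * rad;
	}

	/* index_vtx is now 3 (not 4!), so the 17 iterations below
	 * fill vtx[3..53] and stay inside the array. */
	for (i = 16; i >= 0; i--)
	{
		float a = i / 16.0f * (float)M_PI * 2.0f;

		for (j = 0; j < 3; j++)
		{
			vtx[index_vtx++] = origin[j]
				+ vright[j] * cosf(a) * rad
				+ vup[j] * sinf(a) * rad;
		}
	}
}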
Apparently something (possibly nvidia's driver) on some Windows
installations has some stupid application profile for quake2.exe that
breaks mouse input if the console has been opened.
Our workaround is to rename quake2.exe to yquake2.exe and provide a wrapper
quake2.exe that just calls the real one for backwards compatibility.
This is the source of that wrapper.
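A minimal sketch of what such a wrapper can look like, assuming
yquake2.exe lives next to quake2.exe (the real wrapper may differ in
details):

#include <windows.h>
#include <stdio.h>

int WINAPI
WinMain(HINSTANCE hInstance, HINSTANCE hPrevInstance,
		LPSTR lpCmdLine, int nShowCmd)
{
	STARTUPINFO si;
	PROCESS_INFORMATION pi;
	char cmdline[4096];

	ZeroMemory(&si, sizeof(si));
	si.cb = sizeof(si);
	ZeroMemory(&pi, sizeof(pi));

	/* Forward our arguments to the real executable. */
	snprintf(cmdline, sizeof(cmdline), "yquake2.exe %s", lpCmdLine);

	if (!CreateProcess(NULL, cmdline, NULL, NULL, FALSE, 0,
			NULL, NULL, &si, &pi))
	{
		MessageBox(NULL, "Couldn't start yquake2.exe", "Error",
				MB_OK | MB_ICONERROR);
		return 1;
	}

	/* Wait for the real game so the wrapper behaves like it. */
	WaitForSingleObject(pi.hProcess, INFINITE);
	CloseHandle(pi.hProcess);
	CloseHandle(pi.hThread);

	return 0;
}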
This is mostly the same approach as in GL1. I'm not quite sure if the
software rasterizer can work with all aspects and the like, but I wasn't
able to crash it by trying several random resolutions.
With this, renamed cvars can be rewritten when config.cfg is first
loaded. Please note that once this has been done, older YQ2 versions
can't parse that config.cfg anymore.
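One way to implement this is a small rename table consulted while
config.cfg is executed; a sketch with made-up entries:

#include <string.h>

typedef struct
{
	const char *old_name;
	const char *new_name;
} replacement_t;

/* Hypothetical entries, for illustration only. */
static const replacement_t replacements[] = {
	{"gl_anisotropic", "r_anisotropic"},
	{"gl_msaa_samples", "r_msaa_samples"}
};

/* Returns the current name for a possibly renamed cvar. */
static const char *
Cvar_TranslateName(const char *name)
{
	size_t i;

	for (i = 0; i < sizeof(replacements) / sizeof(replacements[0]); i++)
	{
		if (strcmp(name, replacements[i].old_name) == 0)
		{
			return replacements[i].new_name;
		}
	}

	return name;
}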
gl_maxfps > 1000 breaks things, and cl_maxfps starts to behave weirdly
at >90. While up to 125 or so you get the bugfeature of higher jumping,
beyond that things just get even buggier, at some point causing bugs
like #261.
If too many of these sounds are started in one frame (for example if the
player shoots with the super shotgun into the power screen of a Brain)
things get too loud and OpenAL is forced to scale the volume of several
other sounds and the background music down. That leads to a noticeable
and annoying drop in the overall volume.
Work around that by limiting the number of sounds started. 16 was
chosen by empirical testing.
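A sketch of the idea with hypothetical names: count what has been
started in the current frame and silently drop everything beyond the
limit.

#define MAX_SOUNDS_PER_FRAME 16

static int sounds_started_this_frame;

/* Called once at the beginning of every client frame. */
void
S_FrameReset(void)
{
	sounds_started_this_frame = 0;
}

/* Returns 1 if another sound may be started in this frame. */
int
S_MayStartSound(void)
{
	if (sounds_started_this_frame >= MAX_SOUNDS_PER_FRAME)
	{
		return 0; /* too loud already, drop this sound */
	}

	sounds_started_this_frame++;
	return 1;
}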
This was so broken... Casting the type of an array to silence a
warning... It worked on x86, of course. But it gave a SIGBUS on ARM.
Do it right, cast / copy the content of the array into another
array of the correct type. Yeah.
This fixes issue #231.
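A generic illustration of the pattern with made-up names (not the
actual code):

/* Reinterpreting an array through a pointer cast is undefined
 * behavior and traps on strict alignment CPUs like ARM. Converting
 * the elements one by one into an array of the correct type is
 * safe everywhere. */
void
ConvertSamples(const short *in, float *out, int count)
{
	int i;

	/* Broken: 'float *f = (float *)in;' silences the compiler
	 * but reads shorts as if they were floats. */

	for (i = 0; i < count; i++)
	{
		out[i] = (float)in[i]; /* proper conversion */
	}
}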
There're two possible problems with the calculation of the number of
sound buffers for Vorbis if OpenAL is in use:
* We assume that the (more or less) maximum number of buffers is
allocated during map load. This is not correct if, in a multiplayer
game, a lot of clients with custom models and custom sounds connect at
a later time.
* 64 buffers (about 3 seconds worth of music) may be too low in some
situations.
Work around this by recalculating the number of buffers if necessary.
We're now reserving about 256 (== 12 seconds) buffers.
This may fix issue #252.
Turns out clock_get_time() uses mach_timespec_t, which is very similar
to the POSIX struct timespec, so we're back to just one
Sys_Microseconds() function with an #ifdef __APPLE__ for the
(relatively small) differences.
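A condensed sketch of the unified function (the real one additionally
tracks a base time so that the first call returns 0):

#include <stdint.h>
#include <time.h>

#ifdef __APPLE__
#include <mach/clock.h>
#include <mach/mach.h>
#endif

int64_t
Sys_Microseconds(void)
{
#ifdef __APPLE__
	/* OS X: mach_timespec_t is close enough to struct timespec. */
	clock_serv_t cclock;
	mach_timespec_t now;

	host_get_clock_service(mach_host_self(), SYSTEM_CLOCK, &cclock);
	clock_get_time(cclock, &now);
	mach_port_deallocate(mach_task_self(), cclock);
#else
	struct timespec now;

#ifdef CLOCK_MONOTONIC
	clock_gettime(CLOCK_MONOTONIC, &now);
#else
	/* Fallback, good enough in most cases. */
	clock_gettime(CLOCK_REALTIME, &now);
#endif
#endif

	return (int64_t)now.tv_sec * 1000000ll + now.tv_nsec / 1000;
}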
Older versions of OS X don't implement clock_gettime() and no(?) version
seems to implement CLOCK_MONOTONIC. Work around this by implementing an
OS X specific variant of Sys_Microseconds() that relies on Mach APIs
provided by all OS X versions...
While at it alter the generic variant so that CLOCK_MONOTONIC is used
only if it's available. CLOCK_REALTIME as a fallback should be good
enough in most cases.
This is believed to fix issue #239.
We need to take into account that scaling the characters makes them
bigger, thus they need to be placed depending on the scale and not at a
precalculated position. This should fix issue #247.
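Sketched with a hypothetical Draw_CharScaled()-style function: the
horizontal offset must be multiplied by the scale as well.

void
DrawScaledString(int x, int y, const char *s, float scale)
{
	int i;

	for (i = 0; s[i] != '\0'; i++)
	{
		/* 8 is the unscaled character width; scaling must
		 * move every character, not just stretch it. */
		Draw_CharScaled(x + (int)(i * 8 * scale), y, s[i], scale);
	}
}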
Returning 'microseconds / 1000ll' at the first call is wrong, the game
would think that the first frame took way too much time. For some
reason this works on (my) Win10, but breaks on (my) Win7...
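One way to structure it, sketched: establish the base time on the
first call and return 0 instead of the raw counter value.

#include <stdint.h>

extern int64_t Sys_Microseconds(void);

int32_t
Sys_Milliseconds(void)
{
	static int64_t base; /* 0 until the first call */
	int64_t now = Sys_Microseconds();

	if (base == 0)
	{
		base = now; /* first call: start counting from here */
	}

	return (int32_t)((now - base) / 1000ll);
}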
The original client used single precision mode on Windows and the
default mode on all other platforms. Most platforms (at least OS X,
FreeBSD, NetBSD up to 6.0, OpenBSD and Solaris) set double precision
as default, Linux sets extended double precision... When playing a
network game there're several possibilities:
* Same precision on both sides: This one is okay, of course.
* single precision <-> double precision: This one is okay, too. I guess
this is because the code allows a small deviation between client and
server to work around imprecisions introduced by the network protocol.
* double precision <-> extended double precision: This one is okay,
likely for the same reasons given above.
* single precision <-> extended double precision: This one gives a lot
of mispredictions at the client side.
All of these are more or less academic these days. Yamagi Quake II used
the platform's default mode for ages. And both gcc and clang default to
SSE2 math (with double precision as default on all platforms) when
compiling for amd64. So the only reasonable case is Linux/i386 on one
side and the original client or another source port on Windows/i386 at
the other side.
Work around this by forcing the x87 to double precision mode.
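A sketch of such a setup routine for GCC on Linux/i386, using glibc's
fpu_control.h (other platforms already default to double precision or
use SSE2 math):

#if defined(__GNUC__) && defined(__i386__)
#include <fpu_control.h>

static void
Sys_SetupFPU(void)
{
	fpu_control_t old, new;

	_FPU_GETCW(old);

	/* Clear the precision bits and select double precision
	 * (53 bit mantissa). */
	new = (old & ~_FPU_EXTENDED) | _FPU_DOUBLE;

	_FPU_SETCW(new);
}
#endif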
Miscframes are coupled to renderframes and are just checking for
renderer changes (very cheap) and advancing CD audio if implemented.
There's no reason not to do that at every frame.
Until now the curtime variable was set at every call to
Sys_*seconds(). That's a little bit unfortunate because calls to those
functions are scattered around the code. Instead set it once every
frame in
Qcommon_Frame().
The dedicated server runs at cl_maxfps frames per second. Even with
very large values one server frame can never be shorter than 1
millisecond. And the timing doesn't need to be very precise since the
network latency adds a lot more jitter.
Yes, this duplicates some code. But it's at least 100 times more
readable to have two distinct functions for distinct purposes instead
of about 25 #ifdefs.
This shouldn't have any noticeable impact on timing (unless the machine
is way too slow for Quake II) and saves a lot of CPU cycles. 100% load
vs. 17% load on my desktop.
Having the server in its own timing zone seems to simplify things but
introduces slight timing discrepancies. The most visible effect is that
the game runs a little bit too fast, especially in the first cl_maxfps
frames.
Therefore: Remove timeframes, they're unnecessary. Track the time since
the last (client|server) frame instead and pass it to the client and
server when it's called.
This allows us to implement the global timing without an artificial
brake unnecessarily slowing the game down. This is only partially
working, more changes and fixes are coming.
This is a no-op for now. We need this to get a much higher precision
when calculating the frame times. This changes the fixedtime cvar from
milli- to microseconds.
This is the same as the client does for its realtime. It looks at
least somewhat more correct since it prevents rounding errors. And
things are simplified a little bit since the server timing is now
independent of the global timing.
The old framecounter had two problems:
* It measured only the time of the current render frame, not the total
time spent between the last and the current render frame. Therefore the
calculated value was too high.
* It was based upon milliseconds and rather inaccurate.
This new frame counter solves both problems. The total time spent
between two render frames is measured and the measurement is done in
microseconds.
There're three modes:
* cl_drawfps 1 displays the average frame rate calculated over the last
60 frames.
* cl_drawfps 2 displays a nice string with minimal framerate, maximum
framerate and average framerate. All three values are calculated over
the last 60 frames.
* cl_drawfps 3 is the same as number 2 but with a second line showing the
raw values.
TODO:
* Discuss if cl_drawfps should be renamed to cl_showfps. All other
status displays are named cl_show*.
While at it remove several unused drawing functions.
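A sketch of the counter with hypothetical names: collect the last 60
frame times in microseconds and derive average, minimum and maximum
from them.

#include <stdint.h>

static int64_t frametimes[60];
static int frameidx;

void
FPS_AddFrame(int64_t frametime_us)
{
	frametimes[frameidx] = frametime_us;
	frameidx = (frameidx + 1) % 60;
}

void
FPS_Calc(float *avg, float *min, float *max)
{
	int64_t total = 0, lo = INT64_MAX, hi = 0;
	int i;

	for (i = 0; i < 60; i++)
	{
		total += frametimes[i];

		if (frametimes[i] < lo) { lo = frametimes[i]; }
		if (frametimes[i] > hi) { hi = frametimes[i]; }
	}

	if (total == 0 || lo == 0)
	{
		/* Not enough samples yet. */
		*avg = *min = *max = 0.0f;
		return;
	}

	/* Convert microseconds per frame into frames per second. */
	*avg = 1000000.0f / (total / 60.0f);
	*min = 1000000.0f / hi; /* slowest frame -> minimal fps */
	*max = 1000000.0f / lo; /* fastest frame -> maximal fps */
}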
This is the same as the well known Sys_Milliseconds() but like the name
suggests with microsecond precision. To be used in the upcoming new
framecounter.
For some fucking reason, if you set an unsupported
SDL_GL_MULTISAMPLESAMPLES value on Windows (at least Win10 with Intel GPU
drivers, there 16 is unsupported), creating the Window and OpenGL context
will succeed, but you'll get Microsoft's stupid GDI OpenGL software
implementation that only supports OpenGL 1.1.
Before these fixes, the GL3 renderer would just crash and the GL1 renderer
would fail to load, which caused the game to run in the background:
No window, no input, but sound was playing.
Now this problem should be handled properly and if initialization fails,
the rendering backend will be considered not working, and it will
try the gl1 backend next, and if that also fails it'll give up and exit
the game.
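A sketch of the fallback chain; VID_TryLoadRenderer() is a
hypothetical stand-in for the actual loading and initialization code:

#include <stddef.h>

/* Declarations as found in the usual Quake II headers. */
void Com_Printf(const char *fmt, ...);
void Com_Error(int code, const char *fmt, ...);
#define ERR_FATAL 0

static const char *renderers[] = {"gl3", "gl1"};

int VID_TryLoadRenderer(const char *name); /* hypothetical */

void
VID_LoadBestRenderer(void)
{
	size_t i;

	for (i = 0; i < sizeof(renderers) / sizeof(renderers[0]); i++)
	{
		if (VID_TryLoadRenderer(renderers[i]))
		{
			return; /* backend came up and is working */
		}

		Com_Printf("Renderer %s failed, trying next one.\n",
				renderers[i]);
	}

	Com_Error(ERR_FATAL, "No working renderer found, giving up");
}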
Until now the video menu enforced:
* fov set to 90 and horplus set to 1
* fov set to something other than 90 and horplus to 0
If the user had configured another combination through the console, the
menu would reset it, even if only unrelated changes were applied. With
this change horplus is ignored by the menu and only fov is altered. The
rationale behind this is that most users want horplus enabled and all
others can disable it through the console.
This is believed to fix issue #225.
While here reimplement the same hack for baseq2/players, lost somewhere
on the way. This is just another searchpath f*ckup. For some reason
paks have a higher priority than plain directories. We do not want that
for the maps.lst and players/ since id Software decided to put updated
versions of them directly into baseq2/...
This closes issue #217.
SDL_WINDOW_FULLSCREEN changes the display resolution if the requested
resolution is different from the actual resolution.
SDL_WINDOW_FULLSCREEN_DESKTOP doesn't do that, it places a smaller or
bigger render area somewhere inside the fullscreen area. This is
somewhat nicer with modern high resolution flatscreens.
This commit changes vid_fullscreen 1 from SDL_WINDOW_FULLSCREEN to
SDL_WINDOW_FULLSCREEN_DESKTOP. Additionally, vid_fullscreen 2 is
implemented, it uses SDL_WINDOW_FULLSCREEN to create the fullscreen
area.
TL;DR: Use vid_fullscreen 1 to keep the current resolution or use
vid_fullscreen 2 to switch the resolution.
Implementation details: The whole fullscreen stuff is a horrible mess.
Like generations of hackers before me I'm not desperate enough to clean
it up. GLimp_InitGraphics() is modified to take the fullscreen mode as
an integer and not as a boolean. That's a change to the renderer API.
In GLimp_InitGraphics() the needed SDL fullscreen mode flag is
determined once at the top and just used further down below. That saves
some SDL1 <-> SDL2 compatibility cruft. IsFullscreen() was modified to
return the actual fullscreen mode and not just if fullscreen is enabled.
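The flag selection itself boils down to something like this (SDL2, with
fullscreen being the integer passed to GLimp_InitGraphics()):

#include <SDL2/SDL.h>

static Uint32
GetFullscreenFlag(int fullscreen)
{
	switch (fullscreen)
	{
		case 1:
			/* Keep the desktop resolution, place the
			 * render area inside the fullscreen area. */
			return SDL_WINDOW_FULLSCREEN_DESKTOP;
		case 2:
			/* Switch the display to the requested
			 * resolution. */
			return SDL_WINDOW_FULLSCREEN;
		default:
			return 0; /* windowed */
	}
}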
Several platforms - OpenBSD being a prominent example - don't provide a
way to get the executable path. Don't abort, just return the current
directory ./ as the executable dir. This is just a workaround, of
course. The user needs to supply a script that calls ./quake2 in the
correct directory.
The big problem with the old implementation was that stdout.txt and
stderr.txt on Windows became available when nearly all the low level
initialization was already done, regardless of whether the client was
in normal or in portable mode.
Solve this by scanning the command line for the string '-portable'. If
it's not found, stdout and stderr are redirected as early as possible.
If found, the global variable (*sigh*) is_portable is set to true. It's
evaluated later on to set the cvar 'portable', which in turn is used
by the filesystem to decide if the home directory should be added to
the search path.
Maybe we should remove the cvar and stick to the global variable.
While at it change the maximum path length for qconsole.log from
MAX_QPATH to MAX_OSPATH. At least on my Linux laptop MAX_QPATH is
too short.
This commit is still untested on Windows!
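A sketch of the early scan, assuming it runs at the very top of main()
before anything can print:

#include <stdbool.h>
#include <stdio.h>
#include <string.h>

bool is_portable;

static void
Sys_RedirectStdout(int argc, char **argv)
{
	int i;

	for (i = 1; i < argc; i++)
	{
		if (strcmp(argv[i], "-portable") == 0)
		{
			is_portable = true;
			return; /* portable mode: keep console output */
		}
	}

	/* Normal mode: redirect as early as possible. */
	freopen("stdout.txt", "w", stdout);
	freopen("stderr.txt", "w", stderr);
}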
A new linked list fs_rawPath with nodes of type fsRawPath_t is added.
The new function FS_BuildRawPath() fills it at filesystem initialization
with the raw search path directories. Later FS_BuildGenericSearchPath()
and FS_BuildGameSpecificSearchPath() use it to derive the actual search
directories.
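Sketched, with the structure layout inferred from the description
(Z_Malloc(), Q_strlcpy(), MAX_OSPATH and qboolean are the usual
Quake II helpers; the real fsRawPath_t may hold more fields):

typedef struct fsRawPath_s
{
	char path[MAX_OSPATH];   /* raw search path directory */
	qboolean create;         /* create the directory if missing? */
	struct fsRawPath_s *next;
} fsRawPath_t;

fsRawPath_t *fs_rawPath;

/* Prepends one directory to the raw path list. */
static void
FS_AddDirToRawPath(const char *dir, qboolean create)
{
	fsRawPath_t *search = Z_Malloc(sizeof(fsRawPath_t));

	Q_strlcpy(search->path, dir, sizeof(search->path));
	search->create = create;
	search->next = fs_rawPath;
	fs_rawPath = search;
}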