The input system backend was once used in the client and the renderers,
but for some years now it has been an integral part of the client only.
Move it there.
The last commits made some bigger changes to the interaction between the
GL renderers and the client. The code is now SDL 2.0 conformant, window
and context creation are strictly distinct operations and SDL is only
initialized when necessary. Since this broke the client <-> renderer
API, bump its version.
There are a lot of things left to do for dark and cold winter evenings:
* The software renderer implements its own window handling and
reinitializes SDL whenever vid_restart is called. This is highly
problematic.
* vid_fullscreen is abused to communicate changes to the renderer
config throughout the code. That's a very ugly, messy and potentially
very problematic hack, but not easy to remove.
* Some function calls between the client and the renderer are
unnecessary.
The changes to the client <-> renderer interaction fixed issue #302.
In the old world we deinitialized and reinitialized SDL each time we
restarted or changed the renderer. That cleared the whole GL state. In
the new world we leave SDL running and just recreate the window. In
some cases parts of the old renderer's state would leak into the new
renderer, leading to strange problems.
* Another round of general cleanup.
* Introduce the gl3_libgl cvar to force a specific libGL.
* Fix stencil buffer tests.
* Further untangle window <-> context stuff.
The window is now fully at client side, the context at renderer side.
This is another break of the renderer API. And at least GL1 needs to
track this, it's broken for now.
* Even more syntax and code style fixes.
* Rename functions to match their actual purpose.
* Fix comments.
* SDL initialization and shutdown is now client side only. With
SDL 1.2 finally gone there's no need to involve the renderers
in it.
This breaks the client <-> renderer API. I haven't bumped the API
version with this commit because there're likely more changes when
I'm going through the renderer side of things. The VID backend also
needs a lot of love...
It might be a good idea to move these SDL backend files into the
client and rename them. We'll decide that at a later time.
* Some globals could be made static.
* Add comments where appropriate.
* And format the file to one coding style. What is so hard about
keeping to one style?! My IDE is even able to infer the
style from existing code...
FreeBSD has supported printing backtraces for years. The API is the same
as on Linux, the only difference is that libexecinfo must be linked as a
separate library. Since the last FreeBSD version without backtrace support
(FreeBSD 9.3) went out of support some time ago, unconditionally enable
the printing.
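The API itself is tiny. A minimal sketch (function name illustrative,
error handling omitted):

    #include <execinfo.h>
    #include <stdio.h>
    #include <stdlib.h>

    /* Print a backtrace of the current thread to stderr. On FreeBSD
       this needs -lexecinfo at link time, on Linux it's in libc. */
    void Sys_Backtrace(void)
    {
        void *frames[32];
        int num = backtrace(frames, 32);
        char **symbols = backtrace_symbols(frames, num);

        for (int i = 0; i < num; i++)
        {
            fprintf(stderr, "  %s\n", symbols[i]);
        }

        free(symbols);
    }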
We're going to remove support for SDL 1.2 shortly after the next
release. Give the last remaining users a very clear warning about
it: error out and force them to edit the code.
Modern LCD displays often don't have integral refresh rates like 60hz but
fractional ones like 59.95hz. SDL communicates the refresh rate as an
integer. On X11 the rate is rounded up or down with round(), but on
Windows it's (at least on my system with an AMD Radeon) truncated...
So on a 59.95hz display it's just 59hz, Quake II renders 0.95 frames
too few each second and the user sees microstutters.
And return the actual / requested display frame rate increased by one
to work around inaccuracies in Quake II's internal timing. It shouldn't
be a problem if we're running a little bit too fast.
This is believed to fix at least a part of issue #277.
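The workaround boils down to something like this sketch (assuming a
valid window; the fallback value is illustrative):

    #include <SDL2/SDL.h>

    /* Return the display refresh rate plus one, so Quake II's internal
       timing runs slightly fast instead of slightly slow. */
    int GLimp_GetRefreshRate(SDL_Window *window)
    {
        SDL_DisplayMode mode;
        int index = SDL_GetWindowDisplayIndex(window);

        if (SDL_GetCurrentDisplayMode(index, &mode) == 0 && mode.refresh_rate > 0)
        {
            return mode.refresh_rate + 1;  /* e.g. 59 -> 60 on a 59.95hz panel */
        }

        return 60;  /* sane fallback if SDL can't tell us anything */
    }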
We can't rely on the game.dll being unicode conformant. Work around
that by changing the current working directory before calling into
the game.dll, passing a non-unicode string to it and changing back
after we return.
To be able to pass UTF-8 encoded paths through cvars, both the cvar
subsystem and the command parser would need a fair amount of UTF-8
understanding. And I'm not the poor soul that's going to implement
that. Therefore pass the datadir through a global variable.
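A sketch of the idea (the variable and function names are illustrative,
not the actual code):

    #include <windows.h>

    extern WCHAR wide_datadir[MAX_PATH];  /* UTF-16 path to the game data */

    /* Call into the game.dll with the CWD set to the datadir, so the
       DLL only ever sees plain relative paths. */
    void Sys_CallGameFunc(void (*gamefunc)(void))
    {
        WCHAR oldcwd[MAX_PATH];

        GetCurrentDirectoryW(MAX_PATH, oldcwd);
        SetCurrentDirectoryW(wide_datadir);

        gamefunc();

        SetCurrentDirectoryW(oldcwd);  /* restore for the rest of the engine */
    }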
On Unix platforms unicode is implemented through UTF-8, which is
transparent for applications. But on Windows a UTF-16 dialect is
used which needs alteration at application side. This wrapper is
another step towards unicode support on Windows: now we can replace
fopen() by a function that converts our internal UTF-8 paths to
Windows' UTF-16 dialect.
This is a no-op for Unix platforms. The Windows build is broken,
the compiler errors out in shared.h. This will be fixed in a
later commit.
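A sketch of such a wrapper (error handling kept minimal):

    #include <stdio.h>
    #ifdef _WIN32
    #include <windows.h>
    #endif

    /* fopen() replacement: converts UTF-8 paths to Windows' UTF-16
       dialect, plain passthrough on Unix platforms. */
    FILE *Q_fopen(const char *file, const char *mode)
    {
    #ifdef _WIN32
        WCHAR wfile[MAX_PATH];
        WCHAR wmode[16];

        if (MultiByteToWideChar(CP_UTF8, 0, file, -1, wfile, MAX_PATH) == 0)
        {
            return NULL;
        }

        if (MultiByteToWideChar(CP_UTF8, 0, mode, -1, wmode, 16) == 0)
        {
            return NULL;
        }

        return _wfopen(wfile, wmode);
    #else
        return fopen(file, mode);
    #endif
    }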
Caveats:
* fopen() calls in 3rd party code (std_* and unzip) are not replaced.
This may become a problem. We need to check that.
* In the Unix specific code fopen() isn't replaced since it's not
necessary.
With this commit YQ2 is able to start and run on ReFS volumes. :) At
least as long as neither the binary path, the game data path nor the
path to the user's home directory contain anything but ASCII characters.
Please note: This may break some corner cases with home directories
containing unicode characters. They worked until now by pure luck.
A better solution providing full unicode support will be committed
in the next few days.
This brings at least two big advantages:
* No more 8.3 filename fuckups. Until now base0.pak and base0.pak_bak
were the same file for Quake II because only the first 3 characters of
the file extension were taken into account.
* Search paths can contain any Unicode character.
There's no need to exclude directories from the search by flags. In fact
the Unix backend has worked nicely for years without it... Sadly we
can't remove the now superfluous 'canhave' and 'musthave' attributes
from Sys_FindFirst() and Sys_FindNext() since they're defined in
shared.h and may be used by custom game DLLs.
* Remove a bunch of unnecessary functions.
* Reorder functions into logical groups. The ordering is now the same
on Unix and Windows.
While at it add several TODOs to the code. There's no need for special
library loading functions for the game, the Windows backend still uses
a lot of old and fishy DOS functions, etc. All of this will be done at a
later time.
There's no need to duplicate the machine independent parts of the client
initialization and the main loop for every platform.
While at it remove the nearly empty unix.h header and move the Windows
main() into its own file. Now both platforms have the same basic layout.
libSDLmain.a has to be linked and must run anyway. So there's no need
for us to reinvent the wheel, just rely on SDL's process setup, argument
parsing, message handling and so on. As a nice side effect this may fix
some strange bugs related to message handling and argument parsing...
My modifications (jpeg writing and supplying zlib compressor for better
PNG compression) have been merged upstream, so from now on updates
should be easy and painless.
(Sean renamed my stbi_png_level to stbi_write_png_compression_level)
Until now we had 3 modes:
0 -> never grab the mouse.
1 -> always grab the mouse.
2 -> ungrab the mouse if the game is windowed and the console or the
menu is opened or a cinematic is playing.
The new 3rd mode is the same as the 2nd one, but without the "game is
windowed" constraint. Please note that releasing the mouse grab in
fullscreen may have side effects like the game losing focus and being
unable to regain it, especially under X11.
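The decision logic is roughly this (sketch, names illustrative;
'relaxed' means the console or the menu is open or a cinematic is
playing):

    #include <stdbool.h>

    /* Map the in_grab cvar to a yes/no grab decision. */
    static bool WantGrab(int in_grab, bool windowed, bool relaxed)
    {
        switch (in_grab)
        {
            case 0:  return false;                 /* never grab */
            case 1:  return true;                  /* always grab */
            case 2:  return !(windowed && relaxed);
            case 3:  return !relaxed;              /* like 2, ignoring windowed */
            default: return true;
        }
    }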
This was requested by @prg318 in issue #271.
Turns out clock_get_time() uses mach_timespec_t, which is very similar
to the POSIX timespec, so we're back to just one Sys_Microseconds()
function with an #ifdef __APPLE__ for the (relatively small) differences.
Older versions of OS X don't implement clock_gettime() and no(?) version
seems to implement CLOCK_MONOTONIC. Work around this by implementing an
OS X specific variant of Sys_Microseconds() that relies on Mach APIs
provided by all OS X versions...
While at it alter the generic variant so that CLOCK_MONOTONIC is used
only if it's available. CLOCK_REALTIME as a fallback should be good
enough in most cases.
This is believed to fix issue #239.
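Sketched out, the combined function looks about like this (error
handling omitted, not the verbatim code):

    #include <stdint.h>
    #include <time.h>

    #ifdef __APPLE__
    #include <mach/clock.h>
    #include <mach/mach.h>
    #endif

    int64_t Sys_Microseconds(void)
    {
    #ifdef __APPLE__
        /* Mach clock APIs are available on all OS X versions. */
        clock_serv_t cclock;
        mach_timespec_t now;

        host_get_clock_service(mach_host_self(), SYSTEM_CLOCK, &cclock);
        clock_get_time(cclock, &now);
        mach_port_deallocate(mach_task_self(), cclock);
    #else
        struct timespec now;

    #ifdef CLOCK_MONOTONIC
        clock_gettime(CLOCK_MONOTONIC, &now);
    #else
        clock_gettime(CLOCK_REALTIME, &now);  /* good enough fallback */
    #endif
    #endif

        return (int64_t)now.tv_sec * 1000000ll + now.tv_nsec / 1000ll;
    }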
Returning 'microseconds / 1000ll' at the first call is wrong, the game
would think that the first frame took way too much time. For some reason
this works on (my) Win10, but breaks on (my) Win7...
The original client used single precision mode on Windows and the
default mode on all other platforms. Most platforms (at least OS X,
FreeBSD, NetBSD up to 6.0, OpenBSD and Solaris) set double precision
as default, Linux sets extended double precision... When playing a
network game there are several possibilities:
* Same precision on both sides: This one is okay, of course.
* single precision <-> double precision: This one is okay, too. I guess
this is because the code allows a small deviation between client and
server to work around imprecisions introduced by the network protocol.
* double precision <-> extended double precision: This one is okay,
likely for the same reasons given above.
* single precision <-> extended double precision: This one gives a lot
of mispredictions at client side.
All of these are more or less academic these days. Yamagi Quake II has
used the platform's default mode for ages. And both gcc and clang default
to SSE2 math (with double precision as default on all platforms) when
compiling for amd64. So the only reasonable case is Linux/i386 on one
side and the original client or another source port on Windows/i386 at
the other side.
Work around this by forcing the x87 to double precision mode.
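For Linux/i386 with glibc something like this does the trick (sketch):

    #if defined(__linux__) && defined(__i386__)
    #include <fpu_control.h>

    /* Force the x87 FPU from extended to double precision. */
    static void Sys_SetupFPU(void)
    {
        fpu_control_t cw;

        _FPU_GETCW(cw);
        cw &= ~_FPU_EXTENDED;  /* clear the extended precision bits... */
        cw |= _FPU_DOUBLE;     /* ...and select double precision */
        _FPU_SETCW(cw);
    }
    #endif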
Until now the curtime variable was set at every call to Sys_*seconds().
That's a little bit unfortunate because calls to those functions are
scattered around the code. Instead set it once every frame in
Qcommon_Frame().
The dedicated server runs at cl_maxfps frames per second. Even with very
large values one server frame can never be shorter than 1 millisecond.
And the timing doesn't need to be very precise since the network
latency adds a lot more jitter.
This shouldn't have any noticeable impact on timing (unless the machine
is way too slow for Quake II) and saves a lot of CPU cycles: 100% load
vs. 17% load on my desktop.
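The saving comes from sleeping instead of busy-waiting, roughly like
this (sketch; Sys_Microseconds() as introduced earlier, helper name
illustrative):

    #include <stdint.h>
    #include <time.h>

    extern int64_t Sys_Microseconds(void);

    /* Sleep in small steps until it's time for the next server frame. */
    static void Sys_WaitUntil(int64_t targettime)
    {
        while (Sys_Microseconds() < targettime)
        {
            struct timespec wait = {0, 100000};  /* 0.1 milliseconds */
            nanosleep(&wait, NULL);
        }
    }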
This allows us to implement the global timing without an artificial
brake slowing the game down unnecessarily. This is only partially
working, more changes and fixes are coming.
This is a no-op for now. We need this to get a much higher precision
when calculating the frame times. This changes the fixedtime cvar from
milli- to microseconds.
This is the same as the well known Sys_Milliseconds() but like the name
suggests with microsecond precision. To be used in the upcoming new
framecounter.
For some fucking reason, if you set an unsupported
SDL_GL_MULTISAMPLESAMPLES value on Windows (at least Win10 with Intel GPU
drivers, where 16 is unsupported), creating the window and OpenGL context
will succeed, but you'll get Microsoft's stupid GDI OpenGL software
implementation that only supports OpenGL 1.1.
Before these fixes, the GL3 renderer would just crash and the GL1 renderer
would fail to load, which caused the game to run in the background:
no window, no input, but sound was playing...
Now this problem should be handled properly: if initialization fails,
the rendering backend is considered not working, the game tries the gl1
backend next, and if that also fails it gives up and exits.
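One way to detect the situation after context creation is to look at
the version and vendor strings (sketch, heuristic only):

    #include <SDL2/SDL.h>
    #include <SDL2/SDL_opengl.h>
    #include <string.h>

    /* Returns 1 if the context looks usable and 0 if we apparently got
       Microsoft's GDI software implementation (GL 1.1). */
    static int ContextIsUsable(void)
    {
        const char *vendor = (const char *)glGetString(GL_VENDOR);
        const char *version = (const char *)glGetString(GL_VERSION);

        if (!vendor || !version)
        {
            return 0;
        }

        if (strstr(vendor, "Microsoft") && strncmp(version, "1.1", 3) == 0)
        {
            return 0;
        }

        return 1;
    }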
SDL_WINDOW_FULLSCREEN changes the display resolution if the requested
resolution is different from the actual resolution.
SDL_WINDOW_FULLSCREEN_DESKTOP doesn't do that, it places a smaller or
bigger render area somewhere inside the fullscreen area. This is
somewhat nicer with modern high resolution flatscreens.
This commit changes vid_fullscreen 1 from SDL_WINDOW_FULLSCREEN to
SDL_WINDOW_FULLSCREEN_DESKTOP. Additionally vid_fullscreen 2 is
implemented, it uses SDL_WINDOW_FULLSCREEN to create the fullscreen
area.
TL;DR: Use vid_fullscreen 1 to keep the current resolution or use
vid_fullscreen 2 to switch the resolution.
Implementation details: The whole fullscreen stuff is a horrible mess.
Like generations of hackers before me I'm not desperate enough to clean
it up. GLimp_InitGraphics() is modified to take the fullscreen mode as
an integer and not as a boolean. That's a change to the renderer API.
In GLimp_InitGraphics() the needed SDL fullscreen mode flag is
determined once at the top and just used further down below. That saves
some SDL1 <-> SDL2 compatibility cruft. IsFullscreen() was modified to
return the actual fullscreen mode and not just whether fullscreen is
enabled.
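The flag selection at the top of GLimp_InitGraphics() then looks
roughly like this (sketch):

    #include <SDL2/SDL.h>

    static Uint32 FullscreenFlag(int fullscreen)
    {
        switch (fullscreen)
        {
            case 1:  return SDL_WINDOW_FULLSCREEN_DESKTOP;  /* keep resolution */
            case 2:  return SDL_WINDOW_FULLSCREEN;          /* switch resolution */
            default: return 0;                              /* windowed */
        }
    }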
Several platforms - OpenBSD being a prominent example - don't provide a
way to get the executable path. Don't abort, just return the current
dir ./ as the executable dir. This is just a workaround, of course. The
user needs to supply a script that calls ./quake2 in the correct
directory.
The big problem with the old implementation was that stdout.txt and
stderr.txt on Windows only became available when nearly all the low
level initialization was already done, regardless of whether the client
was in normal or in portable mode.
Solve this by scanning the command line for the string '-portable'. If
it's not found, stdout and stderr are redirected as early as possible.
If it's found, the global variable (*sigh*) is_portable is set to true.
It's evaluated later on to set the cvar 'portable', which in turn is
used by the filesystem to decide if the home directory should be added
to the search path.
Maybe we should remove the cvar and stick to the global variable.
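A sketch of the early scan (is_portable as described above, the rest
illustrative):

    #include <stdbool.h>
    #include <stdio.h>
    #include <string.h>

    bool is_portable;  /* evaluated later to set the 'portable' cvar */

    /* Must run before anything prints: scan for -portable and redirect
       stdout and stderr only if we're not in portable mode. */
    static void Sys_RedirectStdout(int argc, char **argv)
    {
        for (int i = 1; i < argc; i++)
        {
            if (strcmp(argv[i], "-portable") == 0)
            {
                is_portable = true;
                return;
            }
        }

        freopen("stdout.txt", "w", stdout);
        freopen("stderr.txt", "w", stderr);
    }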
While at it change the maximum path length for qconsole.log from
MAX_QPATH to MAX_OSPATH. At least on my Linux laptop MAX_QPATH is
too short.
This commit is still untested on Windows!
This prevents Windows from scaling our (fullscreen) window to crap if
the whole desktop is scaled and we're rendering more than 1080p. This is
believed to fix #208.
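One simple way to opt out of the desktop scaler, done before the window
is created (sketch; newer Windows versions offer finer-grained APIs):

    #ifdef _WIN32
    #include <windows.h>

    /* Tell Windows we handle scaling ourselves, so the DWM doesn't
       stretch our output. */
    static void Sys_SetHighDPIMode(void)
    {
        SetProcessDPIAware();
    }
    #endif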
Sometimes cinematics are skipped after the first frame even if the
player didn't press any key. I'm unable to reliably reproduce that,
so my educated guess is that one or more events are still waiting in
SDL's event queue.
For example, during the intermission IN_Update() is not called for 5
seconds, so key presses by impatient players are just added to the queue
and not processed. The first event is used to leave the intermission,
the second event skips the cinematic...
Fix this by implementing a new function IN_FlushQueue() to flush SDL's
event queue and calling it when starting cinematic playback. Yes, this
is just another layer violation. :(
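The function itself is trivial (sketch):

    #include <SDL2/SDL.h>

    /* Drop everything that's still sitting in SDL's event queue, e.g.
       key presses queued up during the intermission. */
    void IN_FlushQueue(void)
    {
        SDL_PumpEvents();  /* move pending OS events into the queue first */
        SDL_FlushEvents(SDL_FIRSTEVENT, SDL_LASTEVENT);
    }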
For some reason setting the MSAA sample count fails at window creation
and not at GL context creation. And of course SDL is unable to detect
beforehand that the requested number of MSAA samples is invalid...
Implement a workaround: Fall back to gl_msaa_samples == 0 if the window
cannot be created.
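The workaround in sketch form (function name illustrative):

    #include <SDL2/SDL.h>

    /* Try to create the window; if that fails with MSAA requested,
       disable multisampling and try once more. */
    static SDL_Window *CreateWindowMSAAFallback(const char *title, int w,
            int h, Uint32 flags)
    {
        SDL_Window *window = SDL_CreateWindow(title, SDL_WINDOWPOS_UNDEFINED,
                SDL_WINDOWPOS_UNDEFINED, w, h, flags);

        if (!window)
        {
            /* The gl_msaa_samples == 0 case: no multisample buffer at all. */
            SDL_GL_SetAttribute(SDL_GL_MULTISAMPLEBUFFERS, 0);
            SDL_GL_SetAttribute(SDL_GL_MULTISAMPLESAMPLES, 0);

            window = SDL_CreateWindow(title, SDL_WINDOWPOS_UNDEFINED,
                    SDL_WINDOWPOS_UNDEFINED, w, h, flags);
        }

        return window;
    }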