SDL_WINDOW_FULLSCREEN changes the display resolution if the requested
resolution is different from the actual resolution.
SDL_WINDOW_FULLSCREEN_DESKTOP doesn't do that, it places a smaller or
bigger render area somewhere inside the fullscreen area. This is
somewhat nicer with modern high resolution flatscreens.
This commit changes vid_fullscreen 1 from SDL_WINDOW_FULLSCREEN to
SDL_WINDOW_FULLSCREEN_DESKTOP. Additionally, vid_fullscreen 2 is
implemented; it uses SDL_WINDOW_FULLSCREEN to create the fullscreen
area.
TL;DR: Use vid_fullscreen 1 to keep the current resolution or use
vid_fullscreen 2 to switch the resolution.
Implementation details: The whole fullscreen stuff is a horrible mess.
Like generations of hackers before me I'm not desperate enough to clean
it up. GLimp_InitGraphics() is modified to take the fullscreen mode as
an integer and not as a boolean. That's a change to the renderer API.
In GLimp_InitGraphics() the needed SDL fullscreen mode flag is
determined once at the top and just used further down below. That saves
some SDL1 <-> SDL2 compatibility cruft. IsFullscreen() was modified to
return the actual fullscreen mode and not just whether fullscreen is
enabled.
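A minimal sketch of how the flag selection might look (the cvar access
and window creation details are assumptions, not the literal code):

    /* Map the vid_fullscreen value to an SDL2 window flag once,
       then reuse it for window creation and IsFullscreen(). */
    int fullscreen = (int)vid_fullscreen->value;
    Uint32 fsflag = 0;

    if (fullscreen == 1)
    {
        fsflag = SDL_WINDOW_FULLSCREEN_DESKTOP; /* keep desktop resolution */
    }
    else if (fullscreen == 2)
    {
        fsflag = SDL_WINDOW_FULLSCREEN;         /* switch display resolution */
    }

    window = SDL_CreateWindow("Quake II", SDL_WINDOWPOS_UNDEFINED,
                              SDL_WINDOWPOS_UNDEFINED, width, height,
                              SDL_WINDOW_OPENGL | fsflag);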
Several platforms - OpenBSD being a prominent example - don't provide a
way to get the executable path. Don't abort, just return the current
directory ./ as the executable dir. This is just a workaround, of
course. The user needs to supply a script that calls ./quake2 in the
correct directory.
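A rough sketch of the fallback, with hypothetical names
(Sys_GetBinaryDir is an illustration, not the actual function):

    const char *
    Sys_GetBinaryDir(void)
    {
        static char exedir[MAX_OSPATH];

        /* A platform specific lookup would go here, e.g.
           readlink("/proc/self/exe") on Linux. On platforms without
           such a facility just fall back to the current directory. */
        snprintf(exedir, sizeof(exedir), "./");

        return exedir;
    }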
The big problem with the old implementation was that stdout.txt and
stderr.txt on Windows became available only when nearly all the low
level initialization was already done, regardless of whether the client
was in normal or in portable mode.
Solve this by scanning the command line for the string '-portable'. If
it's not found, stdout and stderr are redirected as early as possible.
If it's found, the global variable (*sigh*) is_portable is set to true.
It's evaluated later on to set the cvar 'portable', which in turn is
used by the filesystem to decide if the home directory should be added
to the search path.
Maybe we should remove the cvar and stick to the global variable.
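A sketch of the early scan, assuming a classic argc/argv entry point:

    /* Look for -portable before any other initialization; redirect
       stdout/stderr only if it's absent. */
    qboolean is_portable = false;

    for (int i = 1; i < argc; i++)
    {
        if (strcmp(argv[i], "-portable") == 0)
        {
            is_portable = true;
            break;
        }
    }

    if (!is_portable)
    {
        freopen("stdout.txt", "w", stdout);
        freopen("stderr.txt", "w", stderr);
    }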
While at it, change the maximum path length for qconsole.log from
MAX_QPATH to MAX_OSPATH. At least on my Linux laptop MAX_QPATH is
too short.
This commit is still untested on Windows!
This prevents Windows from scaling our (fullscreen) window to crap if
the whole desktop is scaled and we're rendering more than 1080p. This is
believed to fix #208.
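How exactly the scaling is suppressed is not spelled out here; on
Windows one way to do it is to mark the process as DPI aware (whether
this exact call is used is an assumption):

    #include <windows.h>

    /* Tell Windows not to virtualize and upscale our coordinates. */
    SetProcessDPIAware();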
Sometimes cinematics are skipped after the first frame even if the
player didn't press any key. I'm unable to reliably reproduce that,
so my educated guess is that one or more events are still waiting in
SDL's event queue.
For example, during intermission IN_Update() is not called for 5
seconds, key presses by impatient players are just added to the queue
and not processed. The first event is used to leave the intermission,
the second event skips the cinematic...
Fix this by implementing a new function IN_FlushQueue() to flush SDL's
event queue and calling it when starting cinematic playback. Yes, this
is just another layer violation. :(
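A sketch of IN_FlushQueue(), assuming plain SDL2 calls:

    void
    IN_FlushQueue(void)
    {
        /* Pull in everything still pending in the OS,
           then throw the whole queue away. */
        SDL_PumpEvents();
        SDL_FlushEvents(SDL_FIRSTEVENT, SDL_LASTEVENT);
    }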
For some reason setting the MSAA samples fails at window creation and
not at GL context creation. And of course SDL is unable to detect
beforehand that the requested number of MSAA samples is invalid...
Implement a workaround: Fall back to gl_msaa_samples == 0 if the window
cannot be created.
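Sketched out, the retry could look like this (the window creation
details are assumptions):

    if (window == NULL && (int)gl_msaa_samples->value > 0)
    {
        /* Window creation failed with MSAA requested, fall back to
           no multisampling and retry once. */
        Cvar_SetValue("gl_msaa_samples", 0);
        SDL_GL_SetAttribute(SDL_GL_MULTISAMPLEBUFFERS, 0);
        SDL_GL_SetAttribute(SDL_GL_MULTISAMPLESAMPLES, 0);

        window = SDL_CreateWindow("Quake II", SDL_WINDOWPOS_UNDEFINED,
                                  SDL_WINDOWPOS_UNDEFINED, width, height,
                                  flags);
    }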
Resurrect support for render / refresher loadable libraries and use
them to implement an experimental OpenGL 3.2 renderer. Please note that
the new renderer interface is somewhat different from the original one;
old render libraries will NOT work!
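A rough idea of how such a loadable renderer is picked up at runtime,
based on the classic Quake II refresh API (symbol and type names are
assumptions):

    #include <dlfcn.h>

    typedef refexport_t (*GetRefAPI_t)(refimport_t);

    void *handle = dlopen("ref_gl3.so", RTLD_NOW);

    if (handle != NULL)
    {
        GetRefAPI_t GetRefAPI = (GetRefAPI_t)dlsym(handle, "GetRefAPI");

        if (GetRefAPI != NULL)
        {
            /* Hand the import struct to the renderer and get its
               export struct back. */
            re = GetRefAPI(ri);
        }
    }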
- Bump vid_gamma to 1.2 in both GL1 and GL3. A default value of 1.0 is
too dark.
- Lower gl3_overbrightbits to 1.3, the previous value of 1.5 was too
bright. This can be seen in later units, for example on mine1 some
textures blended into white.
- Lower gl3_particle_size to 40. A value of 60 may be okay, but with
gl3_particle_fade_factor 1.2 the particles take up too much screen
real estate in close range combat.
With these changes GL3 looks (at least for me) nearly the same as GL1
rendered through the removed multitexturing path.
To be able to test if the game is running portable, all checks of the
portable cvar must be done after Cvar_Init(). Instead of redirecting
stdout and stderr as early as possible, delay the redirection until
right after Cvar_Init(). After this change the printf() calls in
WinMain() aren't printed into stdout.txt, but I guess that isn't a big
problem. All interesting stuff like the search paths is still there.
Rename fs_portable to portable. It's no longer filesystem specific.
Some people complained about the usage of non-busy waits:
* I was told that there's an input lag with nanosleep(). I still doubt
that, but since the problem is easy to solve...
* Some Intel CPUs throttle the GPU if the selected CPU pstate is too
low. This is especially a problem on Haswell mobile CPUs. Keeping
a core busy works around that.
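For illustration, the two variants boil down to something like this
(Sys_Microseconds() and targettime are made-up names, needs <time.h>):

    /* Busy wait: burns a core, but keeps the CPU out of low pstates
       and is as precise as the clock. */
    while (Sys_Microseconds() < targettime)
    {
        /* spin */
    }

    /* Non-busy wait: cheap on the CPU, but the wakeup time is at the
       mercy of the scheduler. */
    long long now = Sys_Microseconds();

    if (now < targettime)
    {
        struct timespec t = {0, (long)((targettime - now) * 1000)};
        nanosleep(&t, NULL);
    }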
The screenshot command now supports the filetype as an optional
argument (just "screenshot" will use tga like before):
"screenshot png" will save the screenshot as PNG, same with jpg and
tga.
For jpg you can even specify the quality, like "screenshot jpg 90"
(the quality is between 1 and 100, like with libjpeg).
To reduce duplicated code, I added Vid_WriteScreenshot() to refimport_t
and implemented most of it in the client (vid.c).
The renderer still fetches the raw image data from OpenGL or whatever
and then calls ri.Vid_WriteScreenshot(), which will write it to disk in
the format requested by the user.
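Roughly, the division of labor looks like this (the exact signature of
the new import function is an assumption):

    /* Renderer side: grab the raw pixels and hand them to the client. */
    byte *buffer = malloc(vid.width * vid.height * 3);
    glReadPixels(0, 0, vid.width, vid.height, GL_RGB, GL_UNSIGNED_BYTE,
                 buffer);
    ri.Vid_WriteScreenshot(vid.width, vid.height, 3, buffer);
    free(buffer);

    /* Client side (vid.c): write the buffer as tga, png or jpg,
       depending on what the user passed to the screenshot command. */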
The arguments were not used anyway, and returning true/false is clearer
than returning -1 (for error) or something else (which has no deeper
meaning anyway).
Also:
* PrepareForWindow() can now return -1 if there's an error
* suppress some warnings in Makefile
* fix error for building ref_gl.dylib on OSX
So all calls in the reflib (ref_gl.dll/.so/.dylib) to
ri.Con_Printf(print_level, fmt, ...) have been replaced by calls to
R_Printf(print_level, fmt, ...), which uses ri.Com_VPrintf().
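The wrapper itself is just the usual varargs forwarding (sketch):

    #include <stdarg.h>

    void
    R_Printf(int printlevel, const char *fmt, ...)
    {
        va_list argptr;

        va_start(argptr, fmt);
        ri.Com_VPrintf(printlevel, fmt, argptr);
        va_end(argptr);
    }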
Otherwise at least one key may still be marked as down, causing an
immediate abort of playback. While here, be a little bit paranoid
and clean up key states when focus is gained. In theory that's a
no-op.
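The cleanup amounts to marking every key as released, roughly like this
(array and constant names are placeholders):

    /* Forget everything we think is pressed; the next real key down
       event will set it again. */
    for (int key = 0; key < MAX_KEYS; key++)
    {
        keydown[key] = false;
    }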
The old implementation had two problems:
* OSTYPE and ARCH are systemwide defines; overriding them may break
the global libc headers. This is a theoretical problem, I've never
seen it in practice.
* Not all systems set ARCH correctly when building in a chroot env.
For example on Linux ARCH is set to x86_64 when building in an
i386 chroot. Now the user can do something like "make YQ2ARCH=i386"
to get things right.
With vsync enabled the render times of consecutive frames can diverge.
The first frame arrives right at the next display frame and is rendered
without waiting time. The next frame has to wait 20ms. This leads to
some problems with the move prediction if the client is asynchronous.
Fix this by capping the desired frame rate at the display refresh rate.
Also make sure that the network framerate is never higher than the
renderer framerate.
With this commit the timing is always correct:
* With no limit as many frames as possible are rendered. In this case
rfps > nfps and everything's good.
* With vsync enabled rfps > nfps or rfps == nfps is given. Also rfps
will never exceed the display refresh rate.
* On slow hardware either rfps > nfps or an implicit rfps == nfps is
given.
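A sketch of the clamping described above (the refresh rate query and
the cvar names are assumptions):

    int refresh = GLimp_GetRefreshRate();  /* display refresh rate, 0 if unknown */
    int rfps = (int)vid_maxfps->value;     /* renderer frame rate */
    int nfps = (int)cl_maxfps->value;      /* network frame rate */

    if (refresh > 0 && rfps > refresh)
    {
        rfps = refresh;                    /* never render faster than the display */
    }

    if (nfps > rfps)
    {
        nfps = rfps;                       /* network frames never outpace rendering */
    }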
This is a slightly revised version of id Software's original code.
Icculus' code may have some advantages on broken drivers or
underpowered GPUs. Today it's just a performance hog. This is a first
step in fixing #147.
This is more than enough for everyone and prevents wasting CPU time.
Without this change as many client frames as possible are rendered and
Quake II uses a complete core.