* Normally SDL chooses a sane refresh rate for fullscreen windows. Users
may want to override that, so provide a new cvar `vid_rate`. If it's
set to a value greater than 0, we try to get a mode close to the
requested resolution and refresh rate and switch to that.
* A bug in SDL may leave us in the wrong mode. Detect that condition and
abort. See https://bugzilla.libsdl.org/show_bug.cgi?id=4700 for details.
This is part of issue #302.
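Roughly, the mode selection and the sanity check could look like the following sketch; the function name and error handling are illustrative assumptions, not the actual yquake2 code:

```c
#include <SDL2/SDL.h>

/*
 * Illustrative sketch: pick a fullscreen mode close to the requested
 * resolution and refresh rate, then verify the window really ended up
 * in that mode (see the SDL bug linked above).
 */
static int
SetFullscreenMode(SDL_Window *window, int display, int w, int h, int vid_rate)
{
	SDL_DisplayMode wanted = {0}, closest = {0}, real = {0};

	wanted.w = w;
	wanted.h = h;
	wanted.refresh_rate = (vid_rate > 0) ? vid_rate : 0; /* 0: let SDL decide */

	if (SDL_GetClosestDisplayMode(display, &wanted, &closest) == NULL)
	{
		return -1; /* no matching mode at all */
	}

	if (SDL_SetWindowDisplayMode(window, &closest) != 0 ||
	    SDL_SetWindowFullscreen(window, SDL_WINDOW_FULLSCREEN) != 0)
	{
		return -1;
	}

	/* Work around https://bugzilla.libsdl.org/show_bug.cgi?id=4700:
	 * SDL may silently leave us in another mode, so double check. */
	if (SDL_GetWindowDisplayMode(window, &real) != 0 ||
	    real.w != closest.w || real.h != closest.h ||
	    (vid_rate > 0 && real.refresh_rate != closest.refresh_rate))
	{
		return -1; /* caller should abort instead of running in the wrong mode */
	}

	return 0;
}
```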
Print a list of all available modes as soon as SDL's video backend
initializes and the real display mode after the window was created
or altered.
This hopefully helps debugging problems with display mode selection; see
issue #302 for an example.
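A sketch of the kind of output this adds; the real code uses the engine's logging instead of printf(), and the function name is illustrative:

```c
#include <SDL2/SDL.h>
#include <stdio.h>

/* Illustrative sketch: dump all modes SDL reports for one display. */
static void
PrintDisplayModes(int display)
{
	int i;
	int nummodes = SDL_GetNumDisplayModes(display);

	printf("Display %i has %i modes:\n", display, nummodes);

	for (i = 0; i < nummodes; i++)
	{
		SDL_DisplayMode mode;

		if (SDL_GetDisplayMode(display, i, &mode) == 0)
		{
			printf(" - %ix%i@%iHz\n", mode.w, mode.h, mode.refresh_rate);
		}
	}
}
```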
-Add back use of last_position_x and last_position_y
-last_position_x and last_position_y will be set to undefined when the window is shut down IF the currently used display is not the desired display
-last_display will be set to the desired display at window shutdown if it differs from the current one
-vid_displayindex is clamped using ClampDisplayIndexCvar() at startup and window shutdown
-We only need to init the display indices once in GLimp_Init
-We only need to clear the display indices once in GLimp_Shutdown
-Remove extra 'displayindex' variable
-SDL_GetNumVideoDisplays() will always remain the same after the call to SDL_Init(SDL_INIT_VIDEO), so it makes sense to init the display indices in GLimp_Init, which is where we do this.
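A hedged sketch of the clamping; the real ClampDisplayIndexCvar() works on the vid_displayindex cvar, a plain int stands in for it here:

```c
#include <SDL2/SDL.h>

/*
 * Illustrative clamp: keep the requested display index inside the range
 * SDL reports. SDL_GetNumVideoDisplays() doesn't change after
 * SDL_Init(SDL_INIT_VIDEO), so GLimp_Init() is a good place to call it.
 */
static int
ClampDisplayIndex(int requested)
{
	int numdisplays = SDL_GetNumVideoDisplays();

	if (numdisplays < 1 || requested < 0)
	{
		return 0;
	}

	if (requested >= numdisplays)
	{
		return numdisplays - 1;
	}

	return requested;
}
```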
Otherwise the entity's origin might be in the surface, which causes it
to be rendered in full black. This fix is a port from KMQuake2, reported
by @m-x-d. Closes #407.
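A purely illustrative sketch of the general idea (not a copy of the KMQuake2 change): sample the light slightly above the entity's origin so the lookup can't start inside the surface it stands on.

```c
/* Illustrative only: the offset value and function name are assumptions. */
typedef float vec3_t[3];

static void
LightSamplePoint(const vec3_t origin, vec3_t out)
{
	out[0] = origin[0];
	out[1] = origin[1];
	out[2] = origin[2] + 1.0f; /* nudge the sample point out of the surface */
}
```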
There's an "enable alt joy keys" command now. If a key is bound to that
command, all joystick buttons (incl. hat and triggers) are turned from
K_JOYx into K_JOYx_ALT, which allows two keybindings on the same key,
one with the altselector pressed and one without.
If there's no keybinding for K_JOYx_ALT, it will use the binding for
just K_JOYx as a fallback (if it exists).
This is especially handy for creating direct bindings for all the weapons
on the (limited number of) joystick buttons.
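A hedged sketch of that translation; the key constants and the plain binding table are illustrative stand-ins for the client's key handling code:

```c
#include <stdbool.h>
#include <stddef.h>

#define K_JOY_FIRST      203  /* assumed first regular joystick key */
#define K_JOY_LAST       234  /* assumed last regular joystick key */
#define K_JOY_FIRST_ALT  235  /* assumed first K_JOYx_ALT key */

/*
 * Illustrative sketch: if the altselector is pressed, map K_JOYx to
 * K_JOYx_ALT, but fall back to the plain key if the _ALT variant has
 * no binding.
 */
static int
TranslateJoyKey(int key, bool altselector_pressed, const char *const *bindings)
{
	if (altselector_pressed && key >= K_JOY_FIRST && key <= K_JOY_LAST)
	{
		int altkey = key + (K_JOY_FIRST_ALT - K_JOY_FIRST);

		if (bindings[altkey] != NULL && bindings[altkey][0] != '\0')
		{
			return altkey;
		}
	}

	return key;
}
```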
Seems like AMD's Windows driver doesn't like it when we call
glBufferData() *a lot* (other drivers, incl. Intel's, don't seem to
care as much).
Even on an i7-4771 with a Radeon RX 580 I couldn't get stable 60fps
on Windows without this workaround (the open source Linux driver is ok).
This workaround can be enabled/disabled with the gl3_usebigvbo cvar;
by default it's -1 which means "enable if AMD driver is detected".
Enabling it when using an Nvidia GPU with their proprietary drivers
reduces the performance to 1/3 of the fps we get without it, so it
indeed needs to be conditional...
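The decision boils down to something like the following sketch; the vendor string check used to detect the AMD driver is an assumption here:

```c
#include <stdbool.h>
#include <string.h>

/*
 * Illustrative sketch of the gl3_usebigvbo semantics: -1 means
 * "autodetect", 0 forces the workaround off, everything else forces
 * it on. gl_vendor would come from glGetString(GL_VENDOR).
 */
static bool
UseBigVBO(float gl3_usebigvbo_value, const char *gl_vendor)
{
	if (gl3_usebigvbo_value < 0.0f)
	{
		/* Assumed detection: AMD's proprietary driver reports
		 * "ATI Technologies" in its vendor string. */
		return gl_vendor != NULL && strstr(gl_vendor, "ATI Technologies") != NULL;
	}

	return gl3_usebigvbo_value != 0.0f;
}
```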
Use GL3_BufferAndDraw3D() instead of glBufferData() and glDrawArrays()
in each place it's needed.
This by itself doesn't make anything faster, but it will make trying out
different ways to upload data easier.
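A minimal sketch of what such a helper could look like, assuming a GL loader that provides glBufferData() and that the right VBO/VAO is already bound by the caller:

```c
#include <stddef.h>
#include <glad/glad.h> /* stand-in for whatever GL loader the renderer uses */

/*
 * Illustrative helper in the spirit of GL3_BufferAndDraw3D(): upload the
 * vertex data and draw it in one place, so the upload strategy can be
 * swapped later without touching every call site.
 */
static void
BufferAndDraw3D(const void *verts, GLsizei numverts, size_t vertsize, GLenum mode)
{
	glBufferData(GL_ARRAY_BUFFER, (GLsizeiptr)(numverts * vertsize), verts, GL_STREAM_DRAW);
	glDrawArrays(mode, 0, numverts);
}
```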
The developers tested their maps without the fix and decided that it
looked good. Add a new cvar gl_fixsurfsky, defaulting to 0, that enables
the fix if someone really wants it.
The software renderer already did this, but not the GL renderers. Maybe
the logic was lost somewhere along the way... Without this change a
fullbright lightmap is generated for SURF_SKY surfaces, and without the
SURF_DRAWSKY flag the surfaces aren't skipped in RecursiveLightPoint()
and GL3_LM_CreateSurfaceLightmap(). This isn't a problem under real
skyboxes, but it is in cases where SURF_SKY is abused for interior lighting.
rmine2.bsp in rogue is a good place to see the problem.
Reported by @m-x-d, fixes #393.
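A hedged sketch of where such a check could sit; the flag values and the struct are illustrative, not the real renderer headers:

```c
#define SURF_SKY      0x4  /* texinfo flag: sky surface (from the map compiler) */
#define SURF_DRAWSKY  0x4  /* renderer surface flag: treat surface as sky */

typedef struct {
	int texinfo_flags;  /* flags from the BSP's texinfo */
	int flags;          /* renderer-side surface flags */
} surf_sketch_t;

/*
 * Illustrative sketch: only mark SURF_SKY surfaces as SURF_DRAWSKY when
 * gl_fixsurfsky is set, so RecursiveLightPoint() and
 * GL3_LM_CreateSurfaceLightmap() skip them and no fullbright lightmap
 * is generated.
 */
static void
MaybeMarkAsSky(surf_sketch_t *surf, int gl_fixsurfsky_value)
{
	if (gl_fixsurfsky_value && (surf->texinfo_flags & SURF_SKY))
	{
		surf->flags |= SURF_DRAWSKY;
	}
}
```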
At least with MinGW on Windows, vsnprintf() treats a too-small buffer as an
error, returning -1 instead of the number of characters that would have
been printed without the size restriction. Therefore msgLen may be wrong,
leading to all kinds of funny mistakes further down below... Buffer
overflow included. Work around this by handling the msgLen < 0 case and
adding an explicit terminating \0.
This is another case of "I wonder why nobody has ever noticed this";
the GL1 renderer's extension string triggered the buffer overflow each
time the game started.
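A sketch of that workaround, with an illustrative wrapper name and buffer handling:

```c
#include <stdarg.h>
#include <stdio.h>

/*
 * Illustrative sketch: treat a negative return from vsnprintf() (seen
 * with MinGW when the buffer is too small) like truncation and always
 * terminate the buffer explicitly.
 */
static void
SafeSprintf(char *buffer, size_t size, const char *fmt, ...)
{
	va_list ap;
	int msgLen;

	if (size == 0)
	{
		return;
	}

	va_start(ap, fmt);
	msgLen = vsnprintf(buffer, size, fmt, ap);
	va_end(ap);

	if (msgLen < 0 || (size_t)msgLen >= size)
	{
		msgLen = (int)size - 1; /* truncated (or MinGW returned -1) */
	}

	buffer[msgLen] = '\0';
}
```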