Seems like AMD's Windows driver doesn't like it when we call
glBufferData() *a lot* (other drivers, incl. Intel's, don't seem to
care as much).
Even on an i7-4771 with a Radeon RX 580 I couldn't get stable 60fps
on Windows without this workaround (the open source Linux driver is ok).
This workaround can be enabled/disabled with the gl3_usebigvbo cvar;
by default it's -1, which means "enable if an AMD driver is detected".
Enabling it when using an Nvidia GPU with their proprietary driver
reduces the performance to 1/3 of the fps we get without it, so it
indeed needs to be conditional...
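A rough sketch of how the -1 default could be resolved at renderer startup
(the cvar access and the vendor string check are assumptions, variable
names are illustrative):

    // somewhere during GL3 init, after the GL context exists
    const char *vendor = (const char *)glGetString(GL_VENDOR);
    qboolean useBigVBO;

    if (gl3_usebigvbo->value == -1.0f)
    {
        // -1 means "auto": only enable the workaround for AMD's drivers
        useBigVBO = (vendor && (strstr(vendor, "AMD") || strstr(vendor, "ATI")));
    }
    else
    {
        useBigVBO = (gl3_usebigvbo->value != 0.0f);
    }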
Use GL3_BufferAndDraw3D() instead of glBufferData() and glDrawArrays()
in each place they're needed.
This by itself doesn't make anything faster, but it will make trying out
different ways to upload data easier.
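A minimal sketch of what such a wrapper can look like (the signature,
vertex type and STREAM_DRAW usage are assumptions; binding the VAO/VBO
is left to the caller):

    static void
    GL3_BufferAndDraw3D(const gl3_3D_vtx_t *verts, int numVerts, GLenum drawMode)
    {
        // upload the vertex data and draw it in one place, so the
        // upload strategy can be changed later without touching callers
        glBufferData(GL_ARRAY_BUFFER, sizeof(gl3_3D_vtx_t) * numVerts,
                     verts, GL_STREAM_DRAW);
        glDrawArrays(drawMode, 0, numVerts);
    }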
The developers tested their maps without the fix and decided that it
looked good. Add a new cvar gl_fixsurfsky defaulting to 0 that enables
the fix if someone really wants it.
The software renderer already did this, but not the GL renderers. Maybe
the logic was lost somewhere along the way... Without this change a
fullbright lightmap is generated for SURF_SKY surfaces, and without the
SURF_DRAWSKY flag the surfaces aren't skipped in RecursiveLightPoint()
and GL3_LM_CreateSurfaceLightmap(). This isn't a problem under real
skyboxes, but it is in cases where SURF_SKY is abused for interior lighting.
rmine2.bsp in rogue is a good place to see the problem.
Reported by @m-x-d, fixes #393.
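A sketch of the fix at map load time (the exact place and variable names
are assumptions; the flags are the usual Quake II surface flags):

    // when loading the faces of a .bsp, mark sky surfaces so the
    // lightmap code skips them, but only if the user asked for it
    if (gl_fixsurfsky->value && (out->texinfo->flags & SURF_SKY))
    {
        out->flags |= SURF_DRAWSKY;
    }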
At least with MinGW on Windows, vsnprintf() treats a buffer that's too
small for the full output as an error, returning -1 instead of the number
of characters that would have been printed without the size restriction.
Therefore msgLen may be wrong, leading to all kinds of funny mistakes
further down below... buffer overflow included. Work around this by
handling the msgLen < 0 case and adding an explicit terminating \0.
This is another case of "I wonder why nobody has ever noticed this":
the GL1 renderer's extension string triggered the buffer overflow every
time the game started.
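A sketch of the workaround (the function is illustrative, not the actual
call site):

    #include <stdarg.h>
    #include <stdio.h>

    static void
    PrintIntoBuffer(char *msg, size_t size, const char *fmt, ...)
    {
        va_list argptr;
        int msgLen;

        va_start(argptr, fmt);
        msgLen = vsnprintf(msg, size, fmt, argptr);
        va_end(argptr);

        /* MinGW may return -1 on truncation instead of the would-be
         * length; clamp msgLen and terminate the buffer explicitly */
        if (msgLen < 0 || msgLen >= (int)size)
        {
            msgLen = (int)size - 1;
        }

        msg[msgLen] = '\0';
    }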
I guess it makes sense to apply gamma to the color; we do the same
for the standard round particles.
Also, this way the fragment shader for square particles references the
uniCommon UBO (gamma is part of it) - apparently the Intel HD4000
(from Ivy Bridge) GPU driver for Windows has a bug where uniform blocks
that exist in the shader source but aren't actually used can't be found
(with glGetUniformBlockIndex(prog, name)), which we treat as an error
in gl3_shaders.c initShader3D().
fixes #391
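For reference, a rough sketch of the kind of check that trips over this
driver bug (error handling simplified, the binding point value is an
assumption):

    GLuint blockIndex = glGetUniformBlockIndex(prog, "uniCommon");

    if (blockIndex == GL_INVALID_INDEX)
    {
        // on the buggy driver this triggers for uniform blocks that
        // are declared in the shader source but never used
        return false;
    }

    glUniformBlockBinding(prog, blockIndex, 0); // 0: assumed binding point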
If vsync is enabled, misuse it to slow the client down: calculate the
target framerate, add a security margin of 20% and let vsync handle the
rest. This hopefully solves some problems with frametime spikes. This is
an idea by @DanielGibson.
If vsync is disabled, use a simple 1s / fps calculation.
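In (illustrative) code the idea looks roughly like this:

    float targetframerate;

    if (vsyncActive)
    {
        // aim ~20% above the wanted framerate and let vsync do the
        // actual pacing, which smooths out frametime spikes
        targetframerate = maxfps * 1.2f;
    }
    else
    {
        targetframerate = maxfps;
    }

    // frame time budget in microseconds
    long long targetframetime = 1000000LL / (long long)targetframerate;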
GetSystemTimeAsFileTime() is okay as long as the game runs fullscreen.
For some reason its resolution degrades to ~16ms as soon as the game
runs windowed... Better use QueryPerformanceCounter(); it's more reliable
and the recommended API for time counters.
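A sketch of a microsecond timer built on it (the function name is
illustrative):

    #include <windows.h>

    long long
    Sys_Microseconds(void)
    {
        static LARGE_INTEGER freq;
        LARGE_INTEGER now;

        if (freq.QuadPart == 0)
        {
            // ticks per second, constant after boot
            QueryPerformanceFrequency(&freq);
        }

        QueryPerformanceCounter(&now);

        return (now.QuadPart * 1000000LL) / freq.QuadPart;
    }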
Apparently the light source for exploding rockets/grenades is very close
to the surface, so the dot product between the surface normal and the
vector between the light and the pixel returns 0, basically disabling
the dynamic light for that surface.
As a workaround, move the light position (only for that dot product)
a bit above the surface; 32*surfaceNormal looks good.
fixes #386
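The workaround in C-style vector math (the actual change lives in the
fragment shader; names here are illustrative):

    vec3_t offsetLightOrigin, lightToPixel;

    // move the light 32 units above the surface, only for the N.L term
    VectorMA(dlightOrigin, 32.0f, surfaceNormal, offsetLightOrigin);

    VectorSubtract(offsetLightOrigin, pixelOrigin, lightToPixel);
    VectorNormalize(lightToPixel);

    float NdotL = DotProduct(surfaceNormal, lightToPixel);

    if (NdotL < 0.0f)
    {
        NdotL = 0.0f; // surface faces away from the light
    }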
The actually needed size can't be derived from the .bsp file size, because
* many generated structs contain pointers
* there's lots of data generated per face..
* _especially_ for warped faces that are subdivided
Introduced FS_GetFilenameForHandle(fileHandle_t) for this.
This helps if a map has been started with the "wrong" case, which doesn't
fail immediately if it has been loaded from a pack, but will result
in invalid savegame names that (on case-sensitive filesystems) will fail
to load (when going back to a formerly played level).
The r1q2 code prioritized pak files over all other files, i.e. as soon
as a pak file was requested no more files were added to the download
queue until it had finished downloading. That way one could be sure that
assets included in the pak file weren't downloaded in parallel as single
files.
This is a better, bug-fixed and more robust implementation of the same
logic. With this back in place we can switch back to parallel downloads,
which gives a nice speedup on Windows. Maybe, just maybe, some day
Microsoft will fix Windows' crappy I/O...
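A simplified sketch of the queueing rule (names and helpers are made up
for illustration; the real code is more involved):

    static qboolean pakDownloadPending;

    static void
    QueueDownload(const char *name)
    {
        if (pakDownloadPending)
        {
            // the pending pak may already contain this file,
            // don't queue anything until the pak has arrived
            return;
        }

        if (strstr(name, ".pak") != NULL)
        {
            pakDownloadPending = true;
        }

        AddToDownloadQueue(name); /* hypothetical helper */
    }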
Working with getters and setters was a good idea as long as we had one or
two quirks. Now we're at three, with maybe more to come, so it's easier to
use a struct to communicate quirks between the precacher and the HTTP
download code.
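A minimal sketch of such a struct (the field names are made up for
illustration, not the actual ones):

    typedef struct
    {
        qboolean error;          // the last download attempt failed
        qboolean filelist;       // a filelist should be requested
        qboolean pakPrioritized; // a pak download is pending, hold back single files
    } dlquirks_t;

    extern dlquirks_t dlquirks;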
The console stores its text in the key_lines array, which is NUM_KEY_LINES *
MAXCMDLINE chars long. The code never checked for overflows, it just
assumed that a line would never be longer than 256 chars * 8 = 2048
pixels. With modern displays we can have much higher horizontal
resolutions, so the array will overflow sooner or later.
Fix it by clamping the maximum line width to MAXCMDLINE - 2 chars (1
for the prompt and 1 for the terminating \0). While at it, increase
MAXCMDLINE to 1024 chars * 8 = 8192 pixels, which is more than 8k
resolution and should be enough for the years to come.
This is believed to fix at least part of issue #368.
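A sketch of the clamping (the width calculation is illustrative):

    #define MAXCMDLINE 1024

    // how many characters fit on screen at 8 pixels per char
    int maxLineWidth = viddef.width / 8;

    // never exceed the backing array: 1 char for the prompt,
    // 1 for the terminating \0
    if (maxLineWidth > MAXCMDLINE - 2)
    {
        maxLineWidth = MAXCMDLINE - 2;
    }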
While loading a savegame the global edict array is free()d and newly
malloc()ed to reset all entity states. When the game puts the first
client into the server it sends its entity state to us, so as long as
there's only one client everything's okay. But when there are more
clients, the entity states of all clients >1 are dangling. Hack around
that by reconnecting the entity states of clients >1 "manually".
I'm not 100% sure if this is okay for q2pro, but at least in my simple
tests r1q2, q2pro and now yq2 generate the same URL. Nevertheless it's
somewhat inconsistent to search for generic files at /moddir/... and the
filelist at /moddir.filelist.
This closes issue #370.