I really don't see why this constraint was ever necessary. It leads to
one line of pixels not being rendered at either the bottom or the right
edge of the screen. In GL1 this line is just black for whatever reason,
in GL3 garbage is drawn.
In SCR_DrawFieldScaled() the HUD scale factor wasn't taken into account
when calculating the screen area affected by the change. Therefore wrong
coordinates were passed to SCR_AddDirtyPoint() and a part of the changed
area wasn't marked dirty, leading to render artifacts.
This bug was present since HUD scaling was first introduced.
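In other words, the points passed to SCR_AddDirtyPoint() have to be
scaled with the HUD scale factor, just like the drawing itself. A
minimal sketch of the idea (CHAR_WIDTH and the exact extents are
assumptions here, not the actual code):

  /* mark the *scaled* area as dirty, not the unscaled one */
  SCR_AddDirtyPoint(x, y);
  SCR_AddDirtyPoint(x + (int)((width * CHAR_WIDTH + 2) * scale),
                    y + (int)(CHAR_WIDTH * scale));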
I hope that I've referenced all headers required by the libraries. If
I missed some, compilation will still work, but IDEs like CLion won't be
able to resolve all symbols.
Before this change CMake overlinked the q2ded binary and the game.so
game library. Now the same is the case for ref_gl1.so and ref_gl3. This
will be fixed in a later commit.
The flash should only be drawn in the part of the window where the
actual 3D rendering happens, not in the borders added if viewsize < 100
(and apparently also not in the 1 pixel wide border if the resolution is
odd).
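One way to sketch that constraint (an assumption about the mechanism,
not necessarily what the renderer actually does) is to clip the flash
quad to the viewport that was set up for the 3D scene:

  /* vp_* is whatever was passed to glViewport() for the 3D scene,
   * not the full window size */
  glEnable(GL_SCISSOR_TEST);
  glScissor(vp_x, vp_y, vp_w, vp_h);
  /* ... draw the blended flash quad ... */
  glDisable(GL_SCISSOR_TEST);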
* gl3_particle_size: in GL3 the particles should be a bit bigger because
the particles fade out towards the edge, so I put it in a separate
CVar (registration of the new CVars is sketched below)
* gl3_intensity: in GL3 the intensity can have any floating point value,
in GL1 only integers, so it gets its own CVar
* gl3_overbrightbits: gl_overbrightbits had to be 1, 2 or 4, in GL3 it
can have any floating point value.
Changed the particle scaling a bit so they look bigger.
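Registration of the new CVars looks roughly like this (a sketch; the
default values here are placeholders, not the real defaults):

  gl3_particle_size  = ri.Cvar_Get("gl3_particle_size", "40", CVAR_ARCHIVE);
  gl3_intensity      = ri.Cvar_Get("gl3_intensity", "1.5", CVAR_ARCHIVE);
  gl3_overbrightbits = ri.Cvar_Get("gl3_overbrightbits", "1.3", CVAR_ARCHIVE);

All three are read as floats via ->value, which is why non-integer
intensity and overbright values just work in GL3.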
One problem was that GL3_Shutdown() called several functions that use
the gl* function pointers - not a good idea if InitContext() failed
and the function pointers are all NULL. So check for that.
Similarly in GL3/R_ShutdownWindow() calling glClear() etc.
Another problem was that R_SetMode() would, if R_SetMode_impl() failed,
try again with a "safe" resolution (640x480, unless we had another
working resolution before). That's bad if we're already using that
"safe" resolution: GLimp_InitGraphics() would check mode and fullscreen,
decide that nothing has changed, do nothing and return true. That made
SetMode() believe everything is fine, and afterwards all hell breaks
loose.
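A sketch of the kind of guard that avoids this trap (names and
structure are made up for illustration, this is not the actual code):
only skip re-initialization if a working window and context actually
exist.

  static qboolean InitGraphicsSketch(int w, int h, qboolean fullscreen)
  {
      if (contextAlive && w == curWidth && h == curHeight
          && fullscreen == curFullscreen)
      {
          /* nothing changed *and* we know the context works */
          return true;
      }

      /* otherwise (re)create the window and the GL context */
      return CreateWindowAndContext(w, h, fullscreen);
  }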
This makes the fragment shader faster by skipping lights that haven't
marked this surface in GL3_MarkLights().
This seems to improve performance at least slightly everywhere, but
it really helps *a lot* on integrated Intel GPUs like the ones in their
Sandy Bridge, Ivy Bridge and Haswell CPUs (those are the ones we tested).
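The marking follows the classic Quake 2 scheme: GL3_MarkLights()
records which dlights touch a surface, and the shader only evaluates
those. Building such a per-surface bitmask looks roughly like this
(illustration only, not the exact renderer code):

  /* collect the dynamic lights affecting 'surf' in a bitmask */
  unsigned int dlightBits = 0;

  for (int i = 0; i < gl3_newrefdef.num_dlights; i++)
  {
      dlight_t *dl = &gl3_newrefdef.dlights[i];
      float dist = DotProduct(dl->origin, surf->plane->normal)
                   - surf->plane->dist;

      if (fabs(dist) < dl->intensity)
      {
          dlightBits |= (1u << i);
      }
  }
  /* the fragment shader skips lights whose bit isn't set */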
Added dot(surfaceNormal, lightToPixelOnSurfaceNormal) to the equation;
should be Phong-y now? Looks good at least.
The Windows AMD legacy driver needed its usual manual padding..
OSX was totally weird.. There were no errors or warnings from OpenGL
at all, but the dynamic lights were just not visible.
After (too long) debugging the shader I figured out that
dynLights[i].lightIntensity was always 1, and thus
'dynLights[i].lightIntensity - distLightToPos - 64' was negative and set
to 0 with max(0, ...).
I still have no idea why that happens, but removing lightIntensity from
the struct, making lightColor a vec4 and using .a for intensity works...
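The CPU-side mirror of a light in the UBO then looks roughly like this
(a sketch of the layout described above; the field names are
illustrative):

  typedef struct {
      float lightOrigin[4];   /* xyz = position, w = padding */
      float lightColor[4];    /* rgb = color, a = intensity */
  } DynLightSketch_t;

With std140 a vec3 gets padded to 16 bytes anyway, so folding the
intensity into lightColor.a keeps the layout predictable across
drivers.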
Dynamic lights on normal world brushes work, on brush-based entities
probably not yet properly. For this I need the model matrix in the
shader to transform vertex positions and normals to worldspace
(they already are for world brushes, but not entities that might rotate
and move etc).
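For the entity case that means computing a per-entity model matrix and
handing it to the shader, roughly (a sketch; BuildModelMatrix() and the
uniform location are made up):

  float model[16];
  BuildModelMatrix(model, ent->origin, ent->angles); /* translate+rotate */
  glUniformMatrix4fv(uniModelMatrixLoc, 1, GL_FALSE, model);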
Furthermore, while the dynamic lights look nice and smooth, they might
need some fine tuning in the shader.
For this to work there are two bigger changes:
* the vertex data for brushes (gl3_3D_vtx_t) now also contains the
vertex normal
- glpoly_t now contains an array of gl3_3D_vtx_t instead of 7 floats
per vertex (see the sketch after this list)
* 3D shaders now have in vec3 normal, bound to GL3_ATTRIB_NORMAL
* There's a new UBO for light data: uniLights, containing an array of
up to 32 dynamic lights, with data copied from gl3_newrefdef.dlights
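A rough sketch of the new per-vertex struct (the field layout follows
the description above; the real code may use different types/names):

  typedef struct {
      float pos[3];        /* vertex position */
      float texCoord[2];   /* diffuse texture coordinates */
      float lmTexCoord[2]; /* lightmap texture coordinates */
      float normal[3];     /* new: vertex normal for dynamic lights */
  } gl3_3D_vtx_t;

The first 7 floats are what glpoly_t used to store per vertex; the
normal is the new part, bound to GL3_ATTRIB_NORMAL in the 3D shaders.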
..with Radeon 6950 using AMD's legacy driver.
For uploading UBOs it turned out that glBufferData() is faster,
sometimes a lot faster, with several drivers, especially Intel/OSX.
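I.e. the whole buffer is re-specified each frame, presumably letting
the driver orphan the old storage instead of synchronizing an in-place
update. Roughly (a sketch):

  glBindBuffer(GL_UNIFORM_BUFFER, ubo);
  glBufferData(GL_UNIFORM_BUFFER, size, data, GL_STREAM_DRAW);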
If the lightmap textures are 1024x512 instead of 128x128, all original
Q2 levels will only need one lightmap texture (instead of up to 26 or so)
and even maps that needed all 127 (the 128th was the dynamic lightmap)
won't need more than 4.
This should result in fewer glBindTexture() calls.
(Note: When I wrote "1 lightmap texture" I meant 4, because where the
old renderer dynamically blended the up to 4 lightmaps/surface, I put
them in 4 textures that belong together and are all passed to and
blended in the fragment shader)
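With that, binding the lightmaps for a surface just means binding the
set of four textures that belong together, e.g. (a sketch; the texture
unit assignment is an assumption):

  for (int i = 0; i < 4; i++)
  {
      glActiveTexture(GL_TEXTURE1 + i);
      glBindTexture(GL_TEXTURE_2D, lightmapTextureIDs[lmIndex][i]);
  }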
not sure if this is the very best solution..
Every surface can have up to 4 lightmaps.
I now always create 4 lightmaps (in separate textures, so the
corresponding texture coordinates are identical), the "fillers" are
set to 0, so in the shader they won't make a visible difference.
(The shader always adds up lightmaps from 4 textures, but how much
they're actually visible depends on lmScales which also will be set to
0 if "unused")
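On the C side that means setting the up-to-4 scale factors per surface
and zeroing the unused slots, roughly like this (a sketch; where
lmScales lives and how the lightstyle value is fetched are assumptions):

  for (int i = 0; i < 4; i++)
  {
      if (surf->styles[i] != 255) /* 255 = no lightmap in this slot */
      {
          float s = gl3_newrefdef.lightstyles[surf->styles[i]].white;
          lmScales[i][0] = lmScales[i][1] = lmScales[i][2] = s;
          lmScales[i][3] = 1.0f;
      }
      else
      {
          lmScales[i][0] = lmScales[i][1] = 0.0f;
          lmScales[i][2] = lmScales[i][3] = 0.0f;
      }
  }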
If all this turns out to be (too) slow, there could be a special case
for surfaces with only one lightmap, I /think/ that's the most common
case by far.
adjusted to new GL3 stuff, of course.
Also, more code for light style/lightmap scale support.
Should probably (maybe) work once we really have 4 textures
per lightmap id.
(And then of course dynamic lights are still missing)
To be able to test if the game is running portable, all checks of the
portable cvar must be done after Cvar_Init(). Instead of redirecting
stdout and stderr as early as possible, delay the redirection until
right after Cvar_Init(). After this change the printf()s in WinMain()
aren't printed into stdout.txt, but I guess that isn't a big problem.
All interesting stuff like the search paths is still there.
Rename fs_portable to portable. It's no longer filesystem specific.
Normally Q2 writes all persistent data (the configurations, saves, etc.)
into a subdirectory in the user's $HOME. That can be a problem when the
game is installed onto a thumb drive or something like that. Therefore
provide a cvar fs_portable. When set to 1 the game uses its binary dir
as its persistent storage location.
Examples:
./quake2 +set fs_portable 1
./quake2 +set basedir ~/games/quake2 +set fs_portable 1
fs_portable is _not_ saved into the config file. It must be set at
every start!
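The decision itself boils down to something like this (a sketch;
Sys_GetBinaryDir() and Sys_GetHomeDir() stand in for whatever the
platform layer actually provides):

  /* not CVAR_ARCHIVE, so it never ends up in the config file */
  cvar_t *fs_portable = Cvar_Get("fs_portable", "0", CVAR_NOSET);

  /* portable: everything lives next to the binary,
   * otherwise: subdirectory in the user's $HOME as before */
  const char *datadir = fs_portable->value ? Sys_GetBinaryDir()
                                           : Sys_GetHomeDir();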
This closed issue #158.
Some people complained about the usage of non-busy waits:
* I was told that there's an input lag with nanosleep(). I still doubt
that, but since the problem is easy to solve...
* Some Intel CPUs throttle the GPU if the selected CPU pstate is too
low. This is especially a problem on Haswell mobile CPUs. Keeping
a core busy works around that.
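So the wait at the end of a frame becomes a spin loop (a sketch;
Sys_Microseconds() stands in for whatever high-resolution timer the
engine uses):

  /* busy-wait until the frame's target time is reached; keeps one
   * core busy so the CPU doesn't drop into a low pstate */
  while (Sys_Microseconds() < targetframe)
  {
      /* spin */
  }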
The screenshot command now supports the filetype as an optional argument
(plain "screenshot" will use tga like before):
"screenshot png" will save the screenshot as PNG, same with jpg and tga.
For jpg you can even specify the quality, like "screenshot jpg 90"
(the quality is between 1 and 100, like with libjpeg).
To reduce duplicated code, I added Vid_WriteScreenshot() to refimport_t
and implemented most of it in the client (vid.c).
The renderer still fetches the raw image data from OpenGL or whatever
and then calls re.VidWriteScreenshot() which will write it to disk in
the format requested by the user.
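Argument handling in the client is roughly (a sketch; the default jpg
quality here is a placeholder):

  const char *ext = "tga";  /* plain "screenshot" behaves like before */
  int quality = 85;

  if (Cmd_Argc() > 1)
  {
      ext = Cmd_Argv(1);    /* "tga", "png" or "jpg" */
  }
  if (Cmd_Argc() > 2)
  {
      quality = (int)strtol(Cmd_Argv(2), NULL, 10); /* 1..100 */
  }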
pass both normal texture and lightmap to shader instead of rendering the
level geometry again with the lightmap and GL_BLEND.
This is not done yet: some translucent surfaces are buggy now and it's
only static lightmaps. For this to work properly I'll need to add some
more shaders with and without lightmaps and use them accordingly.
For example, translucent surfaces (SURF_TRANS33/66) never have
lightmaps, and neither do perturbed ones (SURF_DRAWTURB) like water and
lava, but scrolling surfaces (SURF_FLOWING) like elevators do use
lightmaps (as long as they're not also translucent or perturbed)...
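On the C side the second pass just disappears: the normal texture and
the lightmap are bound to two texture units and combined in one draw
call, roughly (a sketch):

  glActiveTexture(GL_TEXTURE0);
  glBindTexture(GL_TEXTURE_2D, image->texnum);   /* the normal texture */
  glActiveTexture(GL_TEXTURE1);
  glBindTexture(GL_TEXTURE_2D, lightmapTexture); /* the static lightmap */
  /* ... one draw call; the shader multiplies texel and lightmap ... */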
well, seems to work, but once the lightmaps are rendered with the normal
textured faces, maybe the dynamic part can be done in shader?
(Might even look less blocky, because it's not limited to lightmap
resolution then)