One problem was that GL3_Shutdown() called several functions that use
the gl* function pointers - not a good idea if InitContext() failed
and the function pointers are all NULL. So check for that.
Similarly in GL3/R_ShutdownWindow(), which calls glClear() etc.
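Roughly what the guard looks like (just a sketch; apart from
GL3_Shutdown()/GL3_ShutdownShaders() the names are assumptions):

  void GL3_Shutdown(void)
  {
      // the gl* names are function pointers loaded at runtime; if
      // InitContext() failed they're still NULL, so don't call into GL
      if (glDeleteBuffers != NULL)
      {
          GL3_ShutdownShaders();   // uses glDeleteProgram() etc.
          // ... further GL-using cleanup ...
      }

      // non-GL cleanup (freeing memory, shutting down the window)
      // is safe either way
  }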
Another problem was that R_SetMode() would, if R_SetMode_impl() failed,
try again with a "safe" resolution (640x480, unless we had another
working resolution before) - which is bad if we're already using that
"safe" resolution, because then GLimp_InitGraphics() would see that
neither mode nor fullscreen changed, do nothing and return true. That
would make SetMode() believe everything is fine, and afterwards all
hell breaks loose.
This makes the fragment shader faster by skipping lights that haven't
marked this surface in GL3_MarkLights().
This seems to improve performance at least slightly everywhere, but it
really helps *a lot* on integrated Intel GPUs like the ones in their
Sandy Bridge, Ivy Bridge and Haswell CPUs (those are the ones we tested).
Adding dot(surfaceNormal, lightToPixelOnSurfaceNormal) to the equation -
it should be Phong-y now? Looks good at least.
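A hedged GLSL sketch of that per-light loop in the fragment shader (an
excerpt only; the pass*/lm*/dynLights names are guesses, not the actual
shader):

  for (int i = 0; i < numDynLights; ++i)
  {
      // skip lights that didn't mark this surface in GL3_MarkLights()
      if ((passLightFlags & (1u << i)) == 0u)
          continue;

      vec3 lightToPos = dynLights[i].lightOrigin - passWorldCoord;
      float distLightToPos = length(lightToPos);

      // distance falloff; .a holds the intensity
      float fact = max(0.0, dynLights[i].lightColor.a - distLightToPos - 64.0);

      // scale by the angle between surface normal and light direction,
      // which is what makes it Phong-y
      fact *= max(0.0, dot(normalize(passNormal), normalize(lightToPos)));

      // (in practice the result gets scaled down to lightmap-ish range)
      lmColor.rgb += dynLights[i].lightColor.rgb * fact;
  }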
The Windows AMD legacy driver needed its usual manual padding..
OSX was totally weird.. There were no errors or warnings from OpenGL
at all, but the dynamic lights were just not visible.
After debugging the shader for (too) long I figured out that
dynLights[i].lightIntensity was always 1, and thus
'dynLights[i].lightIntensity - distLightToPos - 64' was negative and
clamped to 0 by max(0, ...).
I still have no idea why that happens, but removing lightIntensity from
the struct, making lightColor a vec4 and using .a for intensity works...
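So the per-light entry now looks roughly like this on the C side (a
sketch, field names are assumptions):

  typedef struct {
      GLfloat lightOrigin[3];
      GLfloat _padding;      // what the AMD legacy driver wanted spelled
                             // out; std140 pads the vec3 to 16 bytes anyway
      GLfloat lightColor[4]; // rgb = color, a = intensity (no separate
                             // lightIntensity float anymore, see above)
  } gl3UniDynLight;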
Dynamic lights on normal world brushes work; on brush-based entities
they probably don't work properly yet. For that I need the model matrix
in the shader to transform vertex positions and normals to worldspace
(they already are in worldspace for world brushes, but not for entities
that might rotate and move etc).
Furthermore, while the dynamic lights look nice and smooth, they might
need some fine tuning in the shader..
For this to work there are two bigger changes:
* the vertex data for brushes (gl3_3D_vtx_t) now also contains the
vertex normal
- glpoly_t contains an array of gl3_3D_vtx_t instead of 7 floats
* 3D shaders now have "in vec3 normal", bound to GL3_ATTRIB_NORMAL
* There's a new UBO for light data: uniLights, containing an array of
up to 32 dynamic lights, with data copied from gl3_newrefdef.dlights
(see the sketch below)
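A rough sketch of copying gl3_newrefdef.dlights into the uniLights data
as it was at this point (dlight_t with origin/color/intensity is the
existing engine type; uniLightsData and GL3_UpdateUBOLights() are made
up for illustration):

  int numDLights = gl3_newrefdef.num_dlights;
  if (numDLights > 32)
  {
      numDLights = 32; // the UBO holds at most 32 lights
  }

  for (int i = 0; i < numDLights; i++)
  {
      dlight_t *dl = &gl3_newrefdef.dlights[i];

      VectorCopy(dl->origin, uniLightsData.dynLights[i].lightOrigin);
      VectorCopy(dl->color, uniLightsData.dynLights[i].lightColor);
      uniLightsData.dynLights[i].lightIntensity = dl->intensity;
  }

  uniLightsData.numDynLights = numDLights;
  GL3_UpdateUBOLights(); // upload the whole struct to the UBO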
..with a Radeon 6950 using AMD's legacy driver.
For uploading UBOs it turned out that glBufferData() is faster,
sometimes a lot faster, with several drivers, especially Intel/OSX.
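So the shared upload helper boils down to something like this (a
sketch; the glBufferData() call is the point):

  static void
  updateUBO(GLuint ubo, GLsizeiptr size, void *data)
  {
      glBindBuffer(GL_UNIFORM_BUFFER, ubo);

      // re-specifying the whole buffer with glBufferData() turned out
      // to be faster here than other update paths (e.g. glBufferSubData()
      // or mapping) on several drivers, especially Intel/OSX
      glBufferData(GL_UNIFORM_BUFFER, size, data, GL_DYNAMIC_DRAW);
  }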
If the lightmap textures are 1024x512 instead of 128x128, all original
Q2 levels will only need one lightmap texture (instead of up to 26 or
so), and even maps that needed all 127 (the 128th was the dynamic
lightmap) won't need more than 4 - a 1024x512 texture holds 32 times as
many luxels as a 128x128 one.
This should result in fewer glBindTexture() calls.
(Note: When I wrote "1 lightmap texture" I meant 4, because where the
old renderer dynamically blended the up to 4 lightmaps per surface, I
put them in 4 textures that belong together and are all passed to and
blended in the fragment shader)
not sure if this is the very best solution..
Every surface can have up to 4 lightmaps.
I now always create 4 lightmaps (in separate textures, so the
corresponding texture coordinates are identical), the "fillers" are
set to 0, so in the shader they won't make a visible difference.
(The shader always adds up lightmaps from 4 textures, but how much
they're actually visible depends on lmScales which also will be set to
0 if "unused")
If all this turns out to be (too) slow, there could be a special case
for surfaces with only one lightmap, I /think/ that's the most common
case by far.
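In the fragment shader that means something like this (a sketch;
sampler/varying names are guesses):

  vec4 lmTex = texture(lightmap0, passLMcoord) * lmScales[0];
  lmTex     += texture(lightmap1, passLMcoord) * lmScales[1];
  lmTex     += texture(lightmap2, passLMcoord) * lmScales[2];
  lmTex     += texture(lightmap3, passLMcoord) * lmScales[3];

  // "unused" lightmaps have lmScales[i] == 0 and contribute nothing
  outColor = texture(tex, passTexCoord) * lmTex;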
adjusted to new GL3 stuff, of course.
Also, more code for light style/lightmap scale support.
Should probably (maybe) work once we really have 4 textures per
lightmap id.
(And then of course dynamic lights are still missing)
The screenshot command now supports the filetype as an optional
argument (just "screenshot" will use tga like before):
"screenshot png" will save the screenshot as PNG, and the same works
with jpg and tga.
For jpg you can even specify the quality, like "screenshot jpg 90"
(the quality is between 1 and 100, like with libjpeg).
To reduce duplicated code, I added Vid_WriteScreenshot() to refimport_t
and implemented most of it in the client (vid.c).
The renderer still fetches the raw image data from OpenGL or whatever
and then calls re.VidWriteScreenshot(), which writes it to disk in the
format requested by the user.
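The renderer-side half then looks roughly like this (a sketch; the
exact name and argument order of the refimport call are assumptions):

  void GL3_ScreenShot(void)
  {
      int w = vid.width, h = vid.height;
      byte *buffer = malloc(w * h * 3);

      if (!buffer)
      {
          return;
      }

      glPixelStorei(GL_PACK_ALIGNMENT, 1);
      glReadPixels(0, 0, w, h, GL_RGB, GL_UNSIGNED_BYTE, buffer);

      // the client side (vid.c) looks at the "screenshot <type> [quality]"
      // arguments and writes tga/png/jpg accordingly
      ri.Vid_WriteScreenshot(w, h, 3, buffer);

      free(buffer);
  }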
pass both normal texture and lightmap to shader instead of rendering the
level geometry again with the lightmap and GL_BLEND.
This is not done yet; some translucent surfaces are buggy now and it's
only static lightmaps. For this to work properly I'll need to add some
more shaders with and without lightmaps and use them accordingly.
For example, translucent surfaces (SURF_TRANS33/66) never have
lightmaps, and neither do perturbed ones (SURF_DRAWTURB) like water and
lava, but scrolling surfaces (SURF_FLOWING) like elevators do use
lightmaps (as long as they're not also translucent or perturbed)...
well, seems to work, but once the lightmaps are rendered with the normal
textured faces, maybe the dynamic part can be done in shader?
(Might even look less blocky, because it's not limited to lightmap
resolution then)
that struct can/should be used with gl3state.vao3D which expects data
as 7 floats (x,y,z, s,t, lms,lmt) - this is used for brushes, sprites,
the sky and more (not for models though).
(For rendering brushes the struct isn't used, the data is already in
that format in float arrays)
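i.e. roughly this (a sketch, field names made up):

  typedef struct {
      GLfloat pos[3];        // x, y, z
      GLfloat texCoord[2];   // s, t
      GLfloat lmTexCoord[2]; // lms, lmt (lightmap coordinates)
  } gl3_3D_vtx_t;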
Beams are now rendered; they are used by the BFG and in some levels
for lasers etc.
which just called R_PolyBlend().. no idea what that all was about..
and no idea why it happened in 3D mode; it's much easier after
SetGL2D(), then it can share code with GL3_Draw_FadeScreen() and doesn't
need any additional messing with transformation matrices
* The particles look more fuzzy than in the old renderer - I think it
looks better this way ;)
* Not sure I'll keep the way they're rendered - instead of calculating
the distance on the CPU when drawing the particles and passing it per
particle, I could set the player (camera) origin in a UBO and calculate
the distance (and, based on that, the size) in the vertex shader.
I could also pass the basic point size via UBO, it's the same for all
particles..
* Deleting shader programs is a lot shorter now, using a loop and the
fact that consecutive fields of the same type in a struct have the same
memory layout as an array of that type.
Now I can just add a gl3ShaderInfo_t to gl3state, set it up in
GL3_InitShaders() and don't have to add anything to
GL3_ShutdownShaders() (this is good, I forgot that all the time and
didn't notice, as it doesn't cause visible errors)
* Of course the color attribute has 4 floats, not 2..
* I read that updating UBOs with glBufferData() is kinda slow.
I didn't change that (yet), but at least all three GL3_UpdateUBO*()
functions now call updateUBO() which can easily be changed to do
whatever is best without touching the other three functions.
* When using gl_pointparameters, the particles always had the same size
regardless of resolution, i.e. they looked bigger (used a bigger part of
the screen) at lower resolutions. Now I scale gl_particle_size according
to the resolution, assuming the configured size looks good at 800x600
(or generally 600px vertical) - see the sketch after this list.
* When not using gl_pointparameters, a textured triangle is rendered.
The texture had a resolution of 8x8 pixels and looked like a cross;
now it's 16x16 and has rounded edges, looking more like a circle.
So particles with "gl_pointparameters 0" should look much better now.
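The resolution scaling mentioned above is basically just this (a
sketch; gl_particle_size is the cvar named above, vid.height is assumed
to be the current vertical resolution):

  // scale the configured size so it covers about the same fraction of
  // the screen as it would at 600 vertical pixels (800x600)
  float pointSize = gl_particle_size->value * (float)vid.height / 600.0f;

  glPointSize(pointSize); // or fed to the shader/point parameters instead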
turns out R_MYgluPerspective() was not the same as gluPerspective()
and thus not equivalent to HMM_Perspective() either. Because of this,
the weapon and corresponding arm looked different in GL3 vs GL1.
Created GL3_MYgluPerspective() to fix that.
Also tested the optimized code in GL3_RotateForEntity() and
rotAroundAxisZYX(); that code is used from now on, and it's all cleaned
up by removing the commented-out code.
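For context, a sketch of how such a frustum-based projection is built
with HMM types - plain gluPerspective() math; whatever R_MYgluPerspective()
additionally did (and HMM_Perspective() doesn't reproduce) would go on
top of the left/right/bottom/top bounds. Not the actual code:

  static hmm_mat4
  buildPerspectiveSketch(float fovy, float aspect, float zNear, float zFar)
  {
      // same frustum bounds gluPerspective() derives from fovy/aspect
      float top = zNear * tanf(fovy * M_PI / 360.0f);
      float bottom = -top;
      float left = bottom * aspect;
      float right = top * aspect;

      // column-major equivalent of glFrustum(left, right, bottom, top, zNear, zFar)
      float A = (right + left) / (right - left);
      float B = (top + bottom) / (top - bottom);
      float C = -(zFar + zNear) / (zFar - zNear);
      float D = -(2.0f * zFar * zNear) / (zFar - zNear);

      hmm_mat4 ret = {{
          { (2.0f * zNear) / (right - left), 0, 0, 0 },
          { 0, (2.0f * zNear) / (top - bottom), 0, 0 },
          { A, B, C, -1.0f },
          { 0, 0, D, 0 }
      }};

      return ret;
  }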
that was easy..
however, not related to this change or left vs right hand, the gun
seems to be drawn too far back, we should see more of the arm..
I wonder where that went wrong...
introducing the vertex color attribute GL3_ATTRIB_COLOR (it's used for
lighting models and to render models flat-colored; I think that's used
for the quad damage effect and similar)
kind of messy commit with all the shit from last weekend, finished now.
Most importantly the common vertex attribute layout stuff using
glBindAttribLocation()
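The core of it (a sketch; the index values and the GLSL attribute names
other than "normal" are assumptions, the GL3_ATTRIB_* names match the
ones used elsewhere in this log):

  enum {
      GL3_ATTRIB_POSITION   = 0,
      GL3_ATTRIB_TEXCOORD   = 1,
      GL3_ATTRIB_LMTEXCOORD = 2,
      GL3_ATTRIB_COLOR      = 3,
      GL3_ATTRIB_NORMAL     = 4
  };

  // bind the attribute indices *before* linking, so every program uses
  // the same layout and VAOs can be shared between shaders
  glBindAttribLocation(prog, GL3_ATTRIB_POSITION, "position");
  glBindAttribLocation(prog, GL3_ATTRIB_TEXCOORD, "texCoord");
  glBindAttribLocation(prog, GL3_ATTRIB_LMTEXCOORD, "lmTexCoord");
  glBindAttribLocation(prog, GL3_ATTRIB_COLOR, "vertColor");
  glBindAttribLocation(prog, GL3_ATTRIB_NORMAL, "normal");
  glLinkProgram(prog);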