not sure if this is the very best solution..
Every surface can have up to 4 lightmaps.
I now always create 4 lightmaps (in separate textures, so the
corresponding texture coordinates are identical), the "fillers" are
set to 0, so in the shader they won't make a visible difference.
(The shader always adds up the lightmaps from 4 textures, but how
visible they actually are depends on lmScales, which will also be set
to 0 if "unused")
If all this turns out to be (too) slow, there could be a special case
for surfaces with only one lightmap, I /think/ that's the most common
case by far.
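Roughly, the summation in the fragment shader could look like the
excerpt below (written as a C string, the way the renderer embeds its
GLSL; the sampler/uniform/varying names here are assumptions, not the
real ones):

/* hypothetical excerpt of the lightmap summation in the fragment shader */
static const char* fragSrc3Dlm =
	"	vec4 lmTex  = texture(lightmap0, passLMcoord) * lmScales[0];\n"
	"	lmTex      += texture(lightmap1, passLMcoord) * lmScales[1];\n"
	"	lmTex      += texture(lightmap2, passLMcoord) * lmScales[2];\n"
	"	lmTex      += texture(lightmap3, passLMcoord) * lmScales[3];\n"
	"	outColor    = lmTex * texture(tex, passTexCoord);\n";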
adjusted to new GL3 stuff, of course.
Also, more code for light style/lightmap scale support.
Should probably (maybe) work once we really have 4 textures
per lightmap id.
(And then of course dynamic lights are still missing)
To be able to test if the game is running portable, all checks of the
portable cvar must be done after Cvar_Init(). Instead of redirecting
stdout and stderr as early as possible, delay the redirection until
right after Cvar_Init(). After this change the printf()s in WinMain()
aren't printed into stdout.txt, but I guess that isn't a big problem.
All interesting stuff like the search paths is still there.
Rename fs_portable to portable. It's no longer filesystem specific.
Normally Q2 writes all persistent data (the configurations, saves, etc.)
into a subdirectory in the users $HOME. That can be a problem when the
game is installed onto a thumb drive or something like that. Therefore
provide a cvar fs_portable. When set to 1 the game uses its binary dir
as its persistent storage location.
Examples:
./quake2 +set fs_portable 1
./quake2 +set basedir ~/games/quake2 +set fs_portable 1
fs_portable is _not_ saved into the config file. It must be set at
every start!
This closes issue #158.
Some people complained about the usage of non-busy waits:
* I was told that there's an input lag with nanosleep(). I still doubt
that, but since the problem is easy to solve...
* Some Intel CPUs throttle the GPU if the selected CPU pstate is too
low. This is especially a problem on Haswell mobile CPUs. Keeping
a core busy works around that.
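A minimal sketch of the busy wait idea, assuming plain POSIX
clock_gettime() (the real code uses the engine's own timer functions):

#include <time.h>

/* spin until 'usec' microseconds have passed instead of sleeping,
 * so the CPU never drops into a low pstate - purely illustrative */
static void BusyWait(long usec)
{
	struct timespec start, now;
	clock_gettime(CLOCK_MONOTONIC, &start);

	for (;;)
	{
		clock_gettime(CLOCK_MONOTONIC, &now);
		long elapsed = (now.tv_sec - start.tv_sec) * 1000000L
		             + (now.tv_nsec - start.tv_nsec) / 1000L;

		if (elapsed >= usec)
		{
			break;
		}
	}
}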
the screenshot command now supports the filetype as optional argument
(just "screenshot" will use tga like before):
"screenshot png" will save the screenshot as PNG, same with jpg, png
and tga.
For jpg, you can even specify the quality, like "screenshot jpg 90"
(the quality is between 1 and 100, like with libjpeg).
To reduce duplicated code, I added Vid_WriteScreenshot() to refimport_t
and implemented most of it in the client (vid.c).
The renderer still fetches the raw image data from OpenGL or whatever
and then calls re.Vid_WriteScreenshot() which will write it to disk in
the format requested by the user.
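The rough shape of that interface, as a sketch (the exact member name
and signature in refimport_t may differ):

/* hypothetical: the renderer hands the raw pixel data to the client,
 * which writes the file in whatever format the user requested */
typedef struct
{
	/* ... other import functions ... */
	void (*Vid_WriteScreenshot)(int width, int height, int comp,
	                            const void *data);
} refimport_t;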
pass both normal texture and lightmap to shader instead of rendering the
level geometry again with the lightmap and GL_BLEND.
This is not finished yet: some translucent surfaces are buggy now and it
only handles static lightmaps. For this to work properly I'll need to
add some more shaders with and without lightmaps and use them accordingly.
For example, translucent surfaces (SURF_TRANS33/66) never have
lightmaps, and neither do perturbed ones (SURF_DRAWTURB) like water and
lava, but scrolling surfaces (SURF_FLOWING) like elevators do use
lightmaps (as long as they're not also translucent or perturbed)...
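A rough sketch of that rule, using the standard Quake II texinfo flag
values (the helper function is just for illustration):

/* standard Quake II texinfo flags */
#define SURF_WARP     0x08  /* turbulent water/lava -> SURF_DRAWTURB */
#define SURF_TRANS33  0x10
#define SURF_TRANS66  0x20
#define SURF_FLOWING  0x40

/* translucent and perturbed surfaces get no lightmap,
 * everything else (including SURF_FLOWING) does */
static int SurfaceUsesLightmap(int texinfoFlags)
{
	return (texinfoFlags & (SURF_TRANS33 | SURF_TRANS66 | SURF_WARP)) == 0;
}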
well, seems to work, but once the lightmaps are rendered with the normal
textured faces, maybe the dynamic part can be done in shader?
(Might even look less blocky, because it's not limited to lightmap
resolution then)
that struct can/should be used with gl3state.vao3D which expects data
as 7 floats (x,y,z, s,t, lms,lmt) - this is used for brushes, sprites,
the sky and more (not for models though).
(For rendering brushes the struct isn't used, the data is already in
that format in float arrays)
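A possible layout for that struct (the names here are made up, the real
ones may differ):

/* 7 floats per vertex, matching what gl3state.vao3D expects */
typedef struct
{
	float x, y, z;   /* position */
	float s, t;      /* texture coordinates */
	float lms, lmt;  /* lightmap texture coordinates */
} gl3_3D_vtx_t;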
beams are now rendered, they are used by the BFG and in some levels for
lasers etc
which just called R_PolyBlend().. no idea what that all was about..
and no idea why it happened in 3D mode, it's much easier after
SetGL2D(), then it can share code with GL3_Draw_FadeScreen() and doesn't
need any additional messing with transformation matrices
* The particles look more fuzzy than in old renderer - I think it looks
better this way ;)
* Not sure I'll keep the way they're rendered - instead of calculating and
passing the distance in GL3_DrawParticles() I could set the player
(camera) origin in a UBO and calculate distance (and based on that
the size) in the vertex shader. I could also pass the basic point size
via UBO, it's the same for all particles..
* Deleting shader programs is a lot shorter now, using a loop and
the fact that consecutive fields of the same type in a struct have
the same memory layout as an array of that type.
Now I can just add a gl3ShaderInfo_t to gl3state, set it up in
GL3_InitShaders() and don't have to add anything to
GL3_ShutdownShaders() (this is good, I forgot that all the time and
didn't notice, as it doesn't cause visible errors)
* Of course the color attribute has 4 floats, not 2..
* I read that updating UBOs with glBufferData() is kinda slow.
I didn't change that (yet), but at least all three GL3_UpdateUBO*()
functions now call updateUBO() which can easily be changed to do
whatever is best without touching the other three functions (see the
sketch after this list).
* When using gl_pointparameters, the particles always had the same size
regardless of resolution, i.e. they look bigger (use a bigger part of
the screen) at lower resolutions. Now I scale gl_particle_size according
to the resolution, assuming the configured size looks good at 800x600
(or generally 600px vertical; see the sketch after this list)
* When not using gl_pointparameters, a textured triangle is rendered.
The texture had a resolution of 8x8 pixels and looked like a cross,
now it's 16x16 and has rounded edges, looking more like a circle.
So particles with "gl_pointparameters 0" should look much better now.
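For the UBO point above, a sketch of what the shared helper could look
like, assuming the GL function pointers are loaded; for now it's plain
glBufferData(), which could later be swapped for glBufferSubData(),
orphaning or mapping without touching the callers:

static void updateUBO(GLuint ubo, GLsizeiptr size, const void *data)
{
	glBindBuffer(GL_UNIFORM_BUFFER, ubo);
	glBufferData(GL_UNIFORM_BUFFER, size, data, GL_DYNAMIC_DRAW);
	glBindBuffer(GL_UNIFORM_BUFFER, 0);
}

And for the particle size, the scaling boils down to treating the
configured size as tuned for a 600px high mode (names are illustrative):

static float ScaledParticleSize(float configuredSize, int viewportHeight)
{
	/* the configured size is assumed to look right at 600px vertical */
	return configuredSize * (float)viewportHeight / 600.0f;
}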
turns out R_MYgluPerspective() was not the same as gluPerspective()
and thus not equivalent to HMM_Perspective() either. Because of this,
the weapon and corresponding arm looked different in GL3 vs GL1.
Created GL3_MYgluPerspective() to fix that.
Also tested the optimized code in GL3_RotateForEntity() and
rotAroundAxisZYX(); use this code from now on, and cleaned it all up by
removing commented-out code.
that was easy..
however, not related to this change or left vs right hand, the gun
seems to be drawn too far back, we should see more of the arm..
I wonder where that went wrong...
introducing the vertex color attribute GL3_ATTRIB_COLOR (it's used for
lighting models and to render models flat-colored, I think that's used
for the quad damage effect and similar)
kind of messy commit with all the shit from last weekend, finished now
Most importantly the common vertex attribute layout stuff using
glBindAttribLocation()
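The idea, sketched: bind the same attribute indices in every program
before linking, so all shaders and VAOs can agree on one layout (the
index values and names here are assumptions):

enum {
	GL3_ATTRIB_POSITION   = 0,
	GL3_ATTRIB_TEXCOORD   = 1, /* for the normal texture */
	GL3_ATTRIB_LMTEXCOORD = 2, /* for the lightmap */
	GL3_ATTRIB_COLOR      = 3
};

/* must be called before glLinkProgram(prog) to take effect */
static void BindCommonAttribLocations(GLuint prog)
{
	glBindAttribLocation(prog, GL3_ATTRIB_POSITION,   "position");
	glBindAttribLocation(prog, GL3_ATTRIB_TEXCOORD,   "texCoord");
	glBindAttribLocation(prog, GL3_ATTRIB_LMTEXCOORD, "lmTexCoord");
	glBindAttribLocation(prog, GL3_ATTRIB_COLOR,      "vertColor");
}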
still no 3D rendering, but in theory it should be able to load models,
bsps etc, just not render them yet.
also moved/copied md2.c and sp2.c to gl/ and gl3/ because they use
renderer-specific types
only for 2D rendering, as we don't have 3D yet; also this might need
a more flexible solution later, as some textures are not supposed to
have intensity applied.
According to the old R_Upload32*() and R_LightScaleTexture() the ones
without mipmaps didn't get intensity. Those were it_pic and it_sky (and
the ones in the "scrap", but those were it_pic too)
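In code, that rule boils down to something like this (the imagetype_t
enum is trimmed down here for illustration):

typedef enum { it_skin, it_sprite, it_wall, it_pic, it_sky } imagetype_t;

/* pics and skies (the non-mipmapped types) keep their original colors,
 * everything else gets intensity applied at upload time */
static int WantsIntensity(imagetype_t type)
{
	return (type != it_pic) && (type != it_sky);
}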