Sometimes cinematics are skipped after the first frame even if the
player didn't press any key. I'm unable to reliably reproduce this,
so my educated guess is that one or more events are still waiting in
SDL's event queue.
For example, during intermission IN_Update() isn't called for 5
seconds; key presses by impatient players are just added to the queue
and not processed. The first event is used to leave the
intermission, the second event skips the cinematic...
Fix this by implementing a new function IN_FlushQueue() that flushes
SDL's event queue, and calling it when starting cinematic playback. Yes,
this is just another layer violation. :(
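A minimal sketch of what IN_FlushQueue() might look like, assuming SDL2's
event API; Key_MarkAllUp() is a hypothetical helper that also resets the
engine-side key-down state:

    void
    IN_FlushQueue(void)
    {
        /* drop everything that is still waiting in SDL's event queue */
        SDL_FlushEvents(SDL_FIRSTEVENT, SDL_LASTEVENT);

        /* hypothetical helper: forget engine-side "key is down" state */
        Key_MarkAllUp();
    }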
When the client is paused (either explicitly or by entering the menu or
console) the cinematic is paused, too. Therefore no more sound samples
are generated and added to the playback queue, and the existing samples
are played over and over again. Until now these samples weren't audible,
because OpenAL marked them as processed and AL_StreamUpdate() removed
them from OpenAL's playback queue. This changed in the previous commit:
now they stay in OpenAL's queue and are audible.
Fix this by calling AL_UnqueueRawSamples() when the menu or console is
entered during cinematic playback.
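A hedged sketch of the call site; cls.key_dest and cl.cinematictime follow
Quake II's client conventions, but the exact wiring is an assumption:

    /* when the menu or console opens while a cinematic is running,
     * throw away the queued raw samples so they can't loop */
    if (cls.key_dest == key_menu || cls.key_dest == key_console)
    {
        if (cl.cinematictime > 0)
        {
            AL_UnqueueRawSamples();
        }
    }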
Newer openal-soft versions changed how processed buffers are
counted in the AL_STOPPED state. Previously only processed buffers were
counted, now all buffers are. Change our unqueue logic to match this new
behavior.
This was debugged and fixed by @xorw, I'm just committing the patch. This
closes issue #185.
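The gist of the new unqueue logic, as a sketch (streamSource stands in for
the streaming source handle):

    ALint state = 0, numBuffers = 0;

    alGetSourcei(streamSource, AL_SOURCE_STATE, &state);

    if (state == AL_STOPPED)
    {
        /* newer openal-soft counts *all* queued buffers as
         * processed once the source has stopped */
        alGetSourcei(streamSource, AL_BUFFERS_QUEUED, &numBuffers);
    }
    else
    {
        alGetSourcei(streamSource, AL_BUFFERS_PROCESSED, &numBuffers);
    }

    while (numBuffers-- > 0)
    {
        ALuint buffer = 0;
        alSourceUnqueueBuffers(streamSource, 1, &buffer);
    }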
For some reason setting MSAA fails at window creation and not at
GL context creation. And of course SDL is unable to detect beforehand
that the requested number of MSAA samples is invalid... Implement a
workaround: fall back to gl_msaa_samples == 0 if the window cannot be
created.
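Roughly the shape of the workaround, as a sketch (the cvar handling and
window flags are simplified):

    window = SDL_CreateWindow("yquake2", SDL_WINDOWPOS_UNDEFINED,
                              SDL_WINDOWPOS_UNDEFINED, width, height, flags);

    if (window == NULL && Cvar_VariableValue("gl_msaa_samples") > 0)
    {
        /* the requested sample count was invalid; retry without MSAA */
        Cvar_SetValue("gl_msaa_samples", 0);
        SDL_GL_SetAttribute(SDL_GL_MULTISAMPLEBUFFERS, 0);
        SDL_GL_SetAttribute(SDL_GL_MULTISAMPLESAMPLES, 0);

        window = SDL_CreateWindow("yquake2", SDL_WINDOWPOS_UNDEFINED,
                                  SDL_WINDOWPOS_UNDEFINED, width, height,
                                  flags);
    }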
The CMakeLists.txt used the same linker flags for all targets, grossly
overlinking q2ded and both render libraries. Fix this by introducing
fine-grained variables holding the linker flags.
Resurrect support for loadable render / refresher libraries and use them
to implement an experimental OpenGL 3.2 renderer. Please note that the new
renderer interface is somewhat different from the original one; old render
libraries will NOT work!
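For reference, loading such a library follows the classic Quake II pattern;
a sketch, with Sys_LoadLibrary()/Sys_GetProcAddress() standing in for the
platform wrappers:

    /* load the renderer library and fetch its entry point */
    void *handle = Sys_LoadLibrary("ref_gl3.so");
    GetRefAPI_t GetRefAPI = (GetRefAPI_t)Sys_GetProcAddress(handle, "GetRefAPI");

    re = GetRefAPI(ri); /* hand in client callbacks, get renderer exports */

    if (re.api_version != API_VERSION)
    {
        Com_Error(ERR_FATAL, "ref_gl3.so has an incompatible api_version");
    }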
There's a target_secret (with targetname "t117"), but nothing triggers
it - that's why the help computer shows four secrets, but you can only
get three of them.
Now when you open the door in front of the hidden secret armor
(by shooting it), it'll trigger the target_secret and you can get all
four secrets.
Fixes #182.
The internal order of the items is determined by Menu_AddItem() and
not by the y position. Without this change the cursor didn't jump from
item to item, but from the mode list box straight to the aspect list
box, skipping the brightness slider.
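In other words, items have to be added in their visual top-to-bottom order;
a sketch (the item names follow the video menu's conventions but are
assumptions here):

    /* cursor order follows insertion order, not the y position */
    Menu_AddItem(&s_opengl_menu, (void *)&s_mode_list);
    Menu_AddItem(&s_opengl_menu, (void *)&s_brightness_slider);
    Menu_AddItem(&s_opengl_menu, (void *)&s_aspect_list);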
- Bump vid_gamma to 1.2 in both GL1 and GL3. A default value of 1.0 is
too dark.
- Lower gl3_overbrightbits to 1.3, the previous value of 1.5 was too
bright. This can be seen in later units, for example on mine1 some
textures blended into white.
- Lower gl3_particle_size to 40. A value of 60 may be okay, but with
gl3_particle_fade_factor 1.2 the particles take up too much screen
real estate in close-range combat.
With these changes GL3 looks (at least to me) nearly the same as GL1
rendered through the removed multitexturing path.
Without capping the brightness, entity models may fade into pure white,
which looks ugly. This can be seen when several flyers fire blaster
bolts at the player or when multiple barrels are exploding. This
change was suggested by @DanielGibson, I'm just the messenger.
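A sketch of such a cap on the C side; the limit of 1.5 and the variable
name are assumptions:

    int i;

    /* keep the summed-up light from blowing models out to pure white */
    for (i = 0; i < 3; i++)
    {
        if (shadelight[i] > 1.5f)
        {
            shadelight[i] = 1.5f;
        }
    }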
I really don't see why this constraint was ever necessary. It leads to
one line of pixels not being rendered at either the bottom or the right
edge of the screen. In GL1, for whatever reason, this line is just black;
in GL3 garbage is drawn.
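The constraint in question is the old rounding in SCR_CalcVrect(), which
looked roughly like this:

    /* old code: round the viewport down to multiples of 8 / 2,
     * losing pixels at the right and bottom edges */
    scr_vrect.width = viddef.width * size / 100;
    scr_vrect.width &= ~7;

    scr_vrect.height = viddef.height * size / 100;
    scr_vrect.height &= ~1;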
In SCR_DrawFieldScaled() the HUD scale factor wasn't taken into account
when calculating the screen area affected by the change. Therefore wrong
coordinates were passed to SCR_AddDirtyPoint() and a part of the changed
area wasn't marked dirty, leading to render artifacts.
This bug had been present since HUD scaling was first introduced.
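The fix amounts to scaling the dirty rectangle the same way as the drawing
itself; a sketch, with factor being the HUD scale and the offsets taken
from the unscaled code path:

    /* the dirty rect must cover the *scaled* output of the field */
    SCR_AddDirtyPoint(x, y);
    SCR_AddDirtyPoint(x + (width * CHAR_WIDTH + 2) * factor,
                      y + 23 * factor);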
I hope that I've referenced all headers required by the libraries. If
I missed some, compilation will still work, but IDEs like CLion won't be
able to resolve all symbols.
Before this change CMake overlinked the q2ded binary and the game.so
game library. Now the same is true for ref_gl1.so and ref_gl3.so. This
will be fixed in a later commit.
The flash should only be drawn in the part of the window where the actual
3D rendering happens, not in the borders added if viewsize < 100
(and apparently also a 1 pixel wide border if the resolution is odd).
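So instead of covering the whole window, the blend quad should use the
viewport rectangle; a sketch with a hypothetical draw helper:

    /* restrict the flash to the area where the 3D scene was rendered */
    GL3_Draw_Flash(v_blend, scr_vrect.x, scr_vrect.y,
                   scr_vrect.width, scr_vrect.height);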
* gl3_particle_size: in GL3 the particles should be a bit bigger because
the particles fade out towards the edge, so I put it in a separate
CVar
* gl3_intensity: in GL3 the intensity can have any floating point value,
in GL1 only integers, so it gets its own CVar
* gl3_overbrightbits: gl_overbrightbits had to be 1, 2 or 4, in GL3 it
can have any floating point value.
Changed the particle scaling a bit so the particles look bigger.
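Their registration might look like this; a sketch, with the defaults taken
from the tuning commit above (the gl3_intensity default is an assumption):

    gl3_particle_size = ri.Cvar_Get("gl3_particle_size", "40", CVAR_ARCHIVE);
    gl3_intensity = ri.Cvar_Get("gl3_intensity", "1.5", CVAR_ARCHIVE);
    gl3_overbrightbits = ri.Cvar_Get("gl3_overbrightbits", "1.3", CVAR_ARCHIVE);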
One problem was that GL3_Shutdown() called several functions that use
the gl* function pointers - not a good idea if InitContext() failed
and the function pointers are all NULL. So check for that.
The same goes for GL3/R_ShutdownWindow() calling glClear() etc.
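The guards themselves are straightforward; a sketch:

    void
    GL3_ShutdownWindow(void)
    {
        /* if InitContext() failed, the gl* function pointers were
         * never loaded; calling through NULL would crash */
        if (glClear != NULL)
        {
            glClear(GL_COLOR_BUFFER_BIT);
        }

        /* destroy window and context as before */
    }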
Another problem was that R_SetMode() would, if R_SetMode_impl() failed,
try again with a "safe" resolution (640x480, unless we had another
working resolution before) - which is bad if we're already using that
"safe" resolution, because then GLimp_InitGraphics() would check mode
and fullscreen, decide nothing has changed, do nothing and return
true, which would make R_SetMode() believe everything is fine, and
afterwards all hell breaks loose.
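One way to guard against that, as a sketch; the names and the safe mode
number are assumptions:

    #define SAFE_MODE 4 /* assumption: mode 4 is 640x480 */

    static qboolean
    R_SetMode(void)
    {
        if (R_SetMode_impl(r_mode->value, vid_fullscreen->value))
        {
            return true;
        }

        if ((int)r_mode->value == SAFE_MODE)
        {
            /* we already failed *in* the safe mode; retrying would make
             * GLimp_InitGraphics() see "no change" and report success */
            return false;
        }

        ri.Cvar_SetValue("r_mode", SAFE_MODE);
        return R_SetMode_impl(SAFE_MODE, false);
    }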
This makes the fragment shader faster by skipping lights that didn't
mark this surface in GL3_MarkLights().
This seems to improve performance at least slightly everywhere, but
it really helps *a lot* on integrated Intel GPUs like the ones in their
Sandy Bridge, Ivy Bridge and Haswell CPUs (those are the ones we tested).
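The marking happens on the CPU side; a sketch of the idea, with field names
following Quake II's dlight tradition:

    /* GL3_MarkLights(): remember which dynamic lights touch this
     * surface; the bits are later handed to the fragment shader,
     * which skips every light whose bit isn't set */
    surf->dlightframe = r_dlightframecount;
    surf->dlightbits |= (1u << lightIndex);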
Adding dot(surfaceNormal, lightToPixelOnSurfaceNormal) to the equation;
it should be Phong-y now? Looks good, at least.
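For reference, that factor is the classic Lambert/diffuse term; in C terms
(DotProduct() is Quake II's vec3 dot product):

    /* 0 when the surface faces away from the light, 1 when it
     * faces the light head-on */
    float lambert = DotProduct(surfaceNormal, lightToPixelOnSurfaceNormal);
    float diffuse = lambert > 0.0f ? lambert : 0.0f;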
The Windows AMD legacy driver needed its usual manual padding..
OSX was totally weird.. There were no errors or warnings from OpenGL
at all, but the dynamic lights were just not visible.
After debugging the shader (for too long) I figured out that
dynLights[i].lightIntensity was always 1, and thus
'dynLights[i].lightIntensity - distLightToPos - 64' was negative and
clamped to 0 by max(0, ...).
I still have no idea why that happens, but removing lightIntensity from
the struct, making lightColor a vec4 and using .a for intensity works...
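The resulting layout, roughly (a sketch; the struct name is an assumption):

    typedef struct {
        GLfloat origin[3];
        GLfloat _pad;          /* std140: a vec3 is padded to 16 bytes */
        GLfloat lightColor[4]; /* rgb = color, a = intensity */
    } gl3UniDynLight;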
Dynamic lights on normal world brushes work; on brush-based entities
they probably don't work properly yet. For this I need the model matrix
in the shader to transform vertex positions and normals to worldspace
(they already are for world brushes, but not for entities that might
rotate and move etc.).
Furthermore, while the dynamic lights look nice and smooth, they might
need some fine-tuning in the shader..
For this to work there are two bigger changes:
* the vertex data for brushes (gl3_3D_vtx_t) now also contains the
vertex normal
- glpoly_t contains an array of gl3_3D_vtx_t instead of 7 floats per vertex
* 3D shaders now have in vec3 normal, bound to GL3_ATTRIB_NORMAL
* There's a new UBO for light data: uniLights, containing an array of
up to 32 dynamic lights, with data copied from gl3_newrefdef.dlights
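The extended vertex format, roughly (a sketch; the field names are
assumptions):

    typedef struct {
        vec3_t pos;
        float texCoord[2];   /* texture coordinates */
        float lmTexCoord[2]; /* lightmap texture coordinates */
        vec3_t normal;       /* new: per-vertex normal for dynamic lights */
    } gl3_3D_vtx_t;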
..with a Radeon 6950 using AMD's legacy driver.
For uploading UBOs it turned out that glBufferData() is faster,
sometimes a lot faster, with several drivers, especially Intel/OSX.
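So the upload re-specifies the whole buffer each time, which lets the
driver orphan the old storage instead of synchronizing; a sketch:

    /* re-specify the whole UBO; drivers can hand out fresh storage
     * instead of stalling on the buffer still in use */
    glBindBuffer(GL_UNIFORM_BUFFER, ubo);
    glBufferData(GL_UNIFORM_BUFFER, size, data, GL_DYNAMIC_DRAW);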
If the lightmap textures are 1024x512 instead of 128x128, all original
Q2 levels will only need one lightmap texture (instead of up to 26 or so),
and even maps that needed all 127 (the 128th was the dynamic lightmap)
won't need more than 4.
This should result in fewer glBindTexture() calls.
(Note: When I wrote "1 lightmap texture" I meant 4, because where the
old renderer dynamically blended the up to 4 lightmaps/surface, I put
them in 4 textures that belong together and are all passed to and
blended in the fragment shader.)
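In code this boils down to the lightmap block size; a sketch using the
constant names from the id renderers:

    /* old: 128x128, new: 1024x512 -- 32 times as many luxels
     * per lightmap texture */
    #define BLOCK_WIDTH 1024
    #define BLOCK_HEIGHT 512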
Not sure if this is the very best solution..
Every surface can have up to 4 lightmaps.
I now always create 4 lightmaps (in separate textures, so the
corresponding texture coordinates are identical); the "fillers" are
set to 0, so in the shader they won't make a visible difference.
(The shader always adds up the lightmaps from all 4 textures, but how
much they actually contribute depends on lmScales, which is also set
to 0 for "unused" lightmaps.)
If all this turns out to be (too) slow, there could be a special case
for surfaces with only one lightmap; I /think/ that's the most common
case by far.
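A sketch of how the scales might be set up; the 255-means-unused convention
comes from Quake II's BSP lightstyles, the rest is an assumption:

    int i;

    /* every surface gets 4 lightmaps; the fillers get scale 0 so the
     * shader's sum over all four contributes nothing for them */
    for (i = 0; i < 4; i++)
    {
        if (surf->styles[i] != 255)
        {
            lmScales[i] = r_newrefdef.lightstyles[surf->styles[i]].white;
        }
        else
        {
            lmScales[i] = 0.0f;
        }
    }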