Apparently the shader math is not precise enough to guarantee that two supposedly orthogonal vectors are truly orthogonal, so their dot product can come out non-zero.
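A minimal sketch of the guard this implies, comparing the dot product against a small epsilon instead of exact zero (the types and names here are illustrative, not the actual shader code):

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

inline float Dot(const Vec3 &a, const Vec3 &b)
{
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

// Hypothetical helper: treat two vectors as orthogonal if their dot
// product is merely close to zero, since limited floating point
// precision means it will rarely be exactly zero.
inline bool NearlyOrthogonal(const Vec3 &a, const Vec3 &b, float eps = 1e-4f)
{
    return std::fabs(Dot(a, b)) < eps;
}
```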
I missed this part when repurposing the vboindex members to store the index buffer offsets. However, since both indices are needed, the index buffer offsets require their own set of variables.
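A sketch of the resulting layout, with hypothetical member names: the vertex buffer index and the index buffer offset each get their own field instead of sharing one.

```cpp
// Hypothetical layout: instead of reusing vboindex for index buffer
// offsets, each buffer gets its own offset so both can be addressed
// at the same time.
struct FlatRenderData
{
    int vboindex = -1;   // offset into the vertex buffer
    int iboindex = -1;   // offset into the index buffer (new, separate field)
};
```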
To reduce the performance impact, legacy mode will now always create flat vertex data on the fly instead of relying on the vertex buffer. This makes the CVAR mostly redundant, because on anything more modern, rendering per subsector will always be slower.
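A rough sketch of the idea, assuming legacy mode writes into a per-frame scratch buffer (all names hypothetical):

```cpp
#include <vector>

struct Vertex { float x, y, z, u, v; };

// Hypothetical subsector: just enough structure for the sketch.
struct Subsector
{
    std::vector<Vertex> verts;
};

// Legacy path: rebuild the flat's vertex data every frame instead of
// reading it back from a persistent vertex buffer.
void DrawFlatLegacy(const Subsector &sub, std::vector<Vertex> &scratch)
{
    scratch.assign(sub.verts.begin(), sub.verts.end()); // regenerate on the fly
    // ... submit 'scratch' as an immediate-style draw call here ...
}
```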
- rename gl_InitModels to InitModels
- add commented-out support for #include in modeldefs (blocked by gene tech having broken #include statements in its modeldef files)
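For illustration, recursive include expansion in a text-based definition parser usually looks something like the sketch below (hypothetical names, not the real modeldef parser); it also shows why a broken #include path has to be reported and skipped:

```cpp
#include <fstream>
#include <iostream>
#include <string>

// Hypothetical recursive include expansion for a definition lump.
// A broken #include (missing file) is reported and skipped, which is
// why this support stays commented out while mods ship bad paths.
std::string ExpandIncludes(const std::string &filename, int depth = 0)
{
    if (depth > 16) return {};          // guard against include cycles
    std::ifstream in(filename);
    if (!in)
    {
        std::cerr << "missing include: " << filename << "\n";
        return {};
    }

    std::string out, line;
    while (std::getline(in, line))
    {
        if (line.rfind("#include", 0) == 0)
        {
            // crude parse of: #include "file"
            auto a = line.find('"'), b = line.rfind('"');
            if (a != std::string::npos && b > a)
                out += ExpandIncludes(line.substr(a + 1, b - a - 1), depth + 1);
        }
        else out += line + "\n";
    }
    return out;
}
```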
The precise way the clipper needs to be maintained may differ between APIs, so it is no longer owned by any render structure; instead, HWDrawInfo only contains a reference.
For OpenGL there is still only one static clipper, because without multithreaded BSP traversal there is no need for more.
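A sketch of the ownership change with simplified types: HWDrawInfo only holds a pointer it is handed, and the GL backend keeps a single static instance.

```cpp
// Simplified sketch of the ownership model described above.
class Clipper { /* angle range bookkeeping elided */ };

struct HWDrawInfo
{
    Clipper *mClipper = nullptr;   // reference only; not owned here

    void SetClipper(Clipper *clip) { mClipper = clip; }
};

// OpenGL backend: one static clipper is enough as long as BSP
// traversal stays single-threaded.
static Clipper staticClipper;

void StartGLScene(HWDrawInfo &di)
{
    di.SetClipper(&staticClipper);
}
```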
Not only are they better placed in the common code, they are also both per-viewpoint rather than per-scene, which makes this a far more suitable home and avoids saving and restoring them in the portal code.
In this case there is no way to discard the parts of the rendered sectors that lie behind the portal, so it should only render the parts that are flagged as visible.
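Roughly, the check this describes (the flag name is hypothetical):

```cpp
#include <cstdint>

// Hypothetical visibility flag set during the main scene pass.
constexpr uint32_t SSF_VISIBLE = 1;

struct Subsector { uint32_t flags = 0; };

// Inside the portal, parts behind it cannot be clipped away, so only
// subsectors already flagged as visible get rendered.
inline bool ShouldRenderInPortal(const Subsector &sub)
{
    return (sub.flags & SSF_VISIBLE) != 0;
}
```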
This will require some comparisons on older hardware. On my GeForce 1060, rendering the full plane with one draw call is clearly faster in all cases I tested.
At least on faster NVIDIA hardware, setting this to false and gl_finishbeforeswap to true gives a better experience because it reduces screen tearing - but the same setting will reduce the frame rate quite dramatically on Intel and can cause bad stalls on some older GPUs when rendering camera textures.
On a fast and modern graphics card this is a lot faster than doing it per subsector, but it may not be without drawbacks on older hardware, so it will require some testing there.
For me, Frozen Time's view over the bridge went from 46 fps to 51 fps with this change; the time saved was roughly 2 ms per frame.
src/r_data/models/models.cpp:418:33: warning: comparison of integers of different signs: 'long' and 'unsigned long' [-Wsign-compare]
src/r_data/models/models.cpp:427:38: warning: comparison of integers of different signs: 'long' and 'unsigned long' [-Wsign-compare]
src/r_data/models/models_ue1.cpp:49:37: warning: comparison of integers of different signs: 'long' and 'unsigned long' [-Wsign-compare]
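The usual fix for this warning class is to give both sides of the comparison the same signedness; a generic sketch (not the actual lines from models.cpp):

```cpp
#include <cstddef>
#include <vector>

void Example(const std::vector<int> &items, long pos)
{
    // Before: 'pos < items.size()' compares long against unsigned long
    // and triggers -Wsign-compare. Casting one side, after checking it
    // is non-negative, makes the comparison well-defined.
    if (pos >= 0 && static_cast<size_t>(pos) < items.size())
    {
        // safe to index items[pos]
    }
}
```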
I did not consider that this is an init-only option, so changing the CVAR may not affect game behavior at all. Instead, its value must be copied on startup into a globally accessible variable that never gets changed again.
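A sketch of the pattern, using stand-ins for the real CVAR machinery (all names hypothetical):

```cpp
// Stand-in for the user-facing CVAR.
int vid_some_option = 1;

// Frozen copy, set exactly once at startup.
static int s_SomeOptionAtStartup;

void V_InitOptions()
{
    s_SomeOptionAtStartup = vid_some_option;   // snapshot at init
}

int GetSomeOption()
{
    return s_SomeOptionAtStartup;   // later CVAR changes are ignored
}
```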
- made the vid_scaleto____ commands less hacky - after finding out I could route the calls through screen->, I found the correct screen-> commands and now do scaling based on the real screen dimensions
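For illustration, scaling based on the real screen dimensions boils down to something like this (hypothetical names; the actual screen-> API is not reproduced here):

```cpp
// Hypothetical: derive the scaled size from the actual framebuffer
// dimensions rather than a hard-coded virtual resolution.
struct ScreenSize { int width, height; };

ScreenSize ScaleToHeight(ScreenSize real, int targetHeight)
{
    // keep the real aspect ratio while scaling to the requested height
    double aspect = double(real.width) / double(real.height);
    return { int(targetHeight * aspect + 0.5), targetHeight };
}
```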
Game code should never call the renderer directly. This must go through the video interface so that it can also work with other framebuffers later.
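A minimal sketch of that indirection: game code talks only to an abstract video interface, and each framebuffer implementation forwards to its own renderer (all names illustrative).

```cpp
// Abstract video interface: the only surface game code may touch.
class VideoInterface
{
public:
    virtual ~VideoInterface() = default;
    virtual void RenderView() = 0;   // forwards to whichever renderer is active
};

// One concrete backend; other framebuffers can slot in later.
class OpenGLFrameBuffer : public VideoInterface
{
public:
    void RenderView() override { /* call into the GL renderer here */ }
};

// Game code never names the renderer, only the interface.
void G_Render(VideoInterface *screen)
{
    screen->RenderView();
}
```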