This is used exclusively by portal borders, which means the setting is irrelevant for walls; for flats, however, it is needed to cover the portal surface so that translucent parts of the outer scene do not bleed through.
src/gl/scene/gl_flats.cpp:215:3: error: cannot jump from this goto statement to its label
src/r_data/models/models.cpp:100:18: error: no member named 'floor' in namespace 'std'
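Both errors have routine fixes. The sketch below is illustrative only, with invented function names rather than the actual models.cpp / gl_flats.cpp code: the floor error means std::floor is used without <cmath> being included, and the goto error means a jump skips the initialization of a local that is still in scope at its label.

```cpp
// Missing <cmath>: std::floor is only guaranteed to be declared once the header is included.
#include <cmath>

double SnapToInteger(double v)
{
	return std::floor(v);
}

// Jump over an initialization: wrapping the local in its own block (or declaring
// it before the goto) keeps it out of scope at the label and satisfies the compiler.
void DrawExample(bool skip)
{
	if (skip) goto done;
	{
		int vertcount = 4;   // no longer alive at the label
		(void)vertcount;
	}
done:
	return;
}
```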
This wasn't being set up properly anymore: the new index-based buffer code is not efficient on GL2, but the render function forgot to skip the buffer checks and jump straight to the fallback path.
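A sketch of the intended control flow, with invented names (gl_legacy_context, RenderFlatImmediate and DrawIndexed are assumptions, not the actual gl_flats.cpp symbols): a legacy context must take the fallback path before any buffer state is examined.

```cpp
extern bool gl_legacy_context;                   // assumed capability flag for GL2-level hardware

void RenderFlatImmediate();                      // hypothetical fallback that builds vertices on the fly
void DrawIndexed(int iboindex, int indexcount);  // hypothetical indexed draw from the shared buffer

void RenderFlat(int iboindex, int indexcount)
{
	if (gl_legacy_context)
	{
		RenderFlatImmediate();               // legacy path: skip the buffer checks entirely
		return;
	}
	if (iboindex >= 0 && indexcount > 0)
	{
		DrawIndexed(iboindex, indexcount);   // modern path: one indexed draw call
	}
}
```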
I missed this part when repurposing the vboindex members to store the index buffer offsets.
However, since both indices are needed, another set of variables is required.
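A minimal sketch of what "another set of variables" could look like; apart from vboindex, the member names are assumptions rather than the actual fields:

```cpp
struct FlatPlaneRenderData
{
	int vboindex  = -1;   // start of the plane's vertices in the vertex buffer
	int iboindex  = -1;   // start of the plane's indices in the index buffer (assumed name)
	int vertcount = 0;    // number of entries referenced by one draw call
};
```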
To reduce the performance impact, legacy mode will now always create flat vertex data on the fly instead of relying on the vertex buffer. This makes the CVAR mostly redundant, because on anything more modern, rendering per subsector will always be slower.
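Roughly what "on the fly" means here, as a sketch with assumed types and names (FlatVertex, BuildSubsectorFan): the legacy path regenerates each subsector's fan when it is drawn instead of fetching pre-built vertices from the shared buffer.

```cpp
#include <vector>

struct FlatVertex { float x, y, z, u, v; };

static void BuildSubsectorFan(const float *xs, const float *ys, int count,
                              float planez, std::vector<FlatVertex> &out)
{
	out.clear();
	for (int i = 0; i < count; i++)
	{
		// flat texture coordinates follow world position (64 map units per tile)
		out.push_back({ xs[i], ys[i], planez, xs[i] / 64.f, -ys[i] / 64.f });
	}
}
```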
The precise way the clipper needs to be maintained may differ between APIs, so it is no longer owned by any render structure; instead, HWDrawInfo only holds a reference to it.
For OpenGL there is still only one static clipper because without multithreaded BSP traversal there is no need for more.
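A minimal sketch of the ownership change, assuming the member and function names shown here (only HWDrawInfo is taken from the text): the draw info references a clipper it does not own, and the GL backend hands out a single static instance.

```cpp
class Clipper { /* clipping state elided */ };

struct HWDrawInfo
{
	Clipper *mClipper = nullptr;   // referenced, not owned; the backend manages its lifetime
};

Clipper *GetStaticClipper()
{
	// one shared instance suffices while the BSP is traversed on a single thread
	static Clipper clipper;
	return &clipper;
}
```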
Not only are they better placed in the common code, they are also per-viewpoint rather than per-scene, so this is a far more suitable place and it avoids having to save and restore them in the portal code.
In this case there is no way to discard the parts of the rendered sectors that lie behind the portal, so it should only render the parts that are flagged as visible.
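A sketch of the restriction, with invented types and a hypothetical visibility flag: only subsectors that the main traversal marked as seen are drawn when the plane is rendered through the portal.

```cpp
#include <vector>

struct SubsectorRef
{
	bool seenThisFrame;   // hypothetical "reached by the main BSP traversal" flag
	int  firstIndex;      // start of this subsector's indices in the index buffer
	int  indexCount;
};

void DrawIndexRange(int first, int count);   // hypothetical buffered draw call

void DrawPortalPlane(const std::vector<SubsectorRef> &subsectors)
{
	for (const SubsectorRef &sub : subsectors)
	{
		if (!sub.seenThisFrame) continue;    // parts behind the portal stay hidden
		DrawIndexRange(sub.firstIndex, sub.indexCount);
	}
}
```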
This will require some comparisons on older hardware. On my GeForce 1060, rendering the full plane with one draw call is clearly faster in all cases I tested.
On a fast and modern graphics card this is a lot faster than rendering per subsector, but it may not be without drawbacks on older hardware, so it will require some testing there.
For me, Frozen Time's view over the bridge went from 46 fps to 51 fps with this change; the time saved was roughly 2 ms per frame.
I did not consider that this is an init-only option, so changing the CVAR at runtime must not affect game behavior at all. Instead, its value must be copied at startup to some globally accessible variable that never gets changed again.
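A sketch of the latch with invented names (vid_prefer_modern_renderer, use_modern_renderer and InitRenderBackend are assumptions): the CVAR is read exactly once at startup and everything else checks the copy, so changing the CVAR in-game is inert.

```cpp
extern bool vid_prefer_modern_renderer;   // hypothetical CVAR-backed value

static bool use_modern_renderer;          // global copy, fixed for the whole session

void InitRenderBackend()
{
	// Latch the option once; later code reads the global and never the CVAR,
	// so changing the CVAR mid-game has no effect until the next start.
	use_modern_renderer = vid_prefer_modern_renderer;
}
```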
Game code should never ever call the renderer directly. This must be done through the video interface so that it can also work with other framebuffers later.
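A sketch of the indirection with invented names (only the idea of a video interface comes from the text): game code calls an abstract interface, the GL backend implements it, and nothing outside the renderer depends on GL directly.

```cpp
class VideoInterface
{
public:
	virtual ~VideoInterface() = default;
	virtual void RenderView() = 0;   // hypothetical entry point the game calls
};

class OpenGLFrameBuffer : public VideoInterface
{
public:
	void RenderView() override
	{
		// forwards into the GL scene renderer; game code never reaches it directly
	}
};

VideoInterface *screen = nullptr;    // active backend, chosen once at startup
```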
These files are not part of the actual renderer but part of the system code.
This means that even with separated modern and legacy GL renderers there will still be only one set of these, unlike everything else.