The binding point needs to be part of the ShaderUniforms class because Vulkan will need this value to generate the declaration in the shader source.
There's still one issue here: Since OpenGL has no local render state, the buffer must be bound every time it is used. Once the code is better abstracted this should be moved to a higher level class that knows all associated data and how to set it up.
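A minimal sketch of the idea, assuming hypothetical member and method names (only ShaderUniforms itself comes from the notes above); the real per-use binding would be a glBindBufferBase call, referenced here only in a comment:

```cpp
// Sketch only: ShaderUniforms is the class named above, but the members and
// methods here are assumptions made for illustration.
#include <cstdio>
#include <string>

class ShaderUniforms
{
public:
    explicit ShaderUniforms(int bindingPoint) : mBindingPoint(bindingPoint) {}

    // The binding point lives in the class so the declaration carrying it can be
    // generated for backends (e.g. Vulkan) that need it in the shader source.
    std::string CreateDeclaration(const char *blockName, const char *fields) const
    {
        char head[128];
        std::snprintf(head, sizeof head, "layout(std140, binding = %d) uniform %s\n{\n",
                      mBindingPoint, blockName);
        return std::string(head) + fields + "};\n";
    }

    // OpenGL has no per-object render state, so this has to be called every time
    // the buffer is used; a real implementation would do something like
    // glBindBufferBase(GL_UNIFORM_BUFFER, mBindingPoint, mBufferId) here.
    void Bind() const {}

private:
    int mBindingPoint;
    unsigned int mBufferId = 0;
};
```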
To reduce the performance impact, legacy mode will now always create flat vertex data on the fly instead of relying on the vertex buffer. This makes the CVAR mostly redundant, because on anything more modern, rendering per subsector will always be slower.
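A rough sketch of what "on the fly" means here, with made-up type and function names and a simplified 64-unit flat texture scale:

```cpp
// Rough sketch only: generate a flat's vertices per frame in legacy mode instead
// of reading them from the shared vertex buffer. All names are hypothetical.
#include <vector>

struct FlatVertex { float x, y, z, u, v; };

struct SubsectorOutline
{
    std::vector<float> xs, ys;   // corner points of the subsector polygon
};

static void CreateFlatVertices(const SubsectorOutline &sub, float planeHeight,
                               std::vector<FlatVertex> &out)
{
    out.clear();
    for (size_t i = 0; i < sub.xs.size(); i++)
    {
        // Flat texture coordinates are derived from world x/y (simplified here).
        out.push_back({ sub.xs[i], sub.ys[i], planeHeight,
                        sub.xs[i] / 64.f, -sub.ys[i] / 64.f });
    }
    // The caller would then draw 'out' with client-side vertex arrays rather than
    // binding the vertex buffer.
}
```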
These files are not part of the actual renderer but part of the system code.
This means that even with separated modern and legacy GL renderers there will still be only one set of these, unlike everything else.
* to do this efficiently, the number of required vertices needs to be calculated up-front
* always create the vertices in the data generation pass, not the render pass.
* added synchronisation code to the vertex buffer allocator (see the sketch below).
Without multithreading this causes a slight slowdown due to the added processing cost (the Frozen Time bridge scene drops from 47 fps to 44 fps on my test machine).
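A minimal sketch of how an up-front-sized, synchronised vertex allocator could look; the class, the vertex stand-in and the atomic bump strategy are assumptions for illustration, not the engine's actual code:

```cpp
// Thread-safe bump allocator sized from the precalculated vertex count.
#include <atomic>
#include <cstddef>

struct Vtx { float x, y, z, u, v; };   // stand-in for the real vertex type

class VertexBufferAllocator
{
public:
    VertexBufferAllocator(Vtx *base, size_t capacity)
        : mBase(base), mCapacity(capacity), mUsed(0) {}

    // Reserve 'count' vertices in one atomic step; this is the synchronisation
    // that lets the data generation pass run from multiple threads.
    Vtx *Alloc(size_t count, size_t *outIndex)
    {
        size_t index = mUsed.fetch_add(count, std::memory_order_relaxed);
        if (index + count > mCapacity)
            return nullptr;   // should not happen if the buffer was sized from the up-front count
        *outIndex = index;
        return mBase + index;
    }

    void Reset() { mUsed.store(0, std::memory_order_relaxed); }

private:
    Vtx *mBase;
    size_t mCapacity;
    std::atomic<size_t> mUsed;
};
```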
- with renderers freely switchable, some shortcuts in the 3D floor code had to be removed, because the hardware renderer can now get FF_THISINSIDE-flagged 3D floors.
- changed handling of attenuated lights in the legacy renderer so that they are adjusted when rendered instead of when spawned; the software renderer needs the light to retain its original values (sketched below).
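A tiny sketch of the adjust-at-render-time idea; the 0.66 factor and all names are placeholders, not the real formula:

```cpp
// The light keeps its spawned values; only the legacy renderer derives an
// adjusted radius when drawing it, so nothing is written back to the light.
struct DynLight
{
    float radius;       // as spawned; the software renderer reads this unchanged
    bool  attenuated;
};

inline float GetLegacyRenderRadius(const DynLight &light)
{
    return light.attenuated ? light.radius * 0.66f : light.radius;
}
```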
* the MAPINFO options now get handled in g_mapinfo.cpp and g_level.cpp, just like the rest of them, as members of level_info_t and FLevelLocals (see the sketch below).
* RecalcVertexHeights has been made a member of vertex_t and moved to p_sectors.cpp.
* the dumpgeometry CCMD has been moved to p_setup.cpp
The old organization made sense when ZDoom was still a thing, but now it would be better if all pure data with no dependence on renderer implementation details was moved out.
A separation between GL2 and GL3+4 renderers looks inevitable, and the more data is out of the renderer when that happens, the better.
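As a purely hypothetical illustration of the level_info_t/FLevelLocals pattern mentioned above, with heavily simplified stand-ins for the real structs and a made-up option name:

```cpp
// Heavily simplified stand-ins; 'someoption' and the transfer function are
// made up for the example.
struct level_info_t
{
    int someoption = 0;        // filled by the MAPINFO parser in g_mapinfo.cpp
};

struct FLevelLocals
{
    int someoption = 0;        // what the game code actually reads at run time
};

// Done once when the level starts (the real equivalent would live in g_level.cpp).
inline void TransferLevelInfo(FLevelLocals &level, const level_info_t &info)
{
    level.someoption = info.someoption;
}
```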
This was originally invented to fix the sprite offsets for the hardware renderer.
Changed it so that it doesn't override the original offsets but acts as a second set.
A new CVAR has been added to allow controlling the behavior per renderer.
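A sketch of the two-offset-sets idea; the struct, the mode values and 'spriteoffset_mode' are illustrative stand-ins, not the real CVAR or data layout:

```cpp
// Keep both offset sets and let a CVAR pick one per renderer.
struct SpriteOffsets
{
    int  origX = 0, origY = 0;     // offsets as stored in the sprite lump
    int  fixedX = 0, fixedY = 0;   // corrected offsets meant for the hardware renderer
    bool hasFixed = false;
};

static int spriteoffset_mode = 1;  // stands in for the CVAR (0 = always use original offsets)

inline void GetRenderOffsets(const SpriteOffsets &s, bool hwRenderer, int &x, int &y)
{
    const bool useFixed = s.hasFixed && hwRenderer && spriteoffset_mode != 0;
    x = useFixed ? s.fixedX : s.origX;
    y = useFixed ? s.fixedY : s.origY;
}
```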
This was done mainly to reduce the number of occurrences of the word FTexture, but it immediately helped detect two small, mostly harmless bugs thanks to the stricter type checks.
* store the frame time in the current screen buffer, where all render code can access it (sketched below).
* replace some uses of I_MSTime with I_FPSTime, because they should not use a per-frame timer. The only one left is the wipe code, but even that doesn't look like it needs either a per-frame timer or a timer counting from the start of the playsim.
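A minimal sketch of the frame-time-in-the-screen-buffer idea; the class, member and method names are assumptions (DFrameBuffer_Sketch is not the real class):

```cpp
// Store the frame's timestamp once per frame on the screen buffer object.
#include <cstdint>

class DFrameBuffer_Sketch
{
public:
    // Set once at the start of each frame from whatever timer the engine uses.
    void SetFrameTime(uint64_t ms) { mFrameTimeMS = ms; }

    // Render code reads the timestamp from here instead of calling a timer
    // directly, so the whole frame sees one consistent value.
    uint64_t FrameTime() const { return mFrameTimeMS; }

private:
    uint64_t mFrameTimeMS = 0;
};
```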
- consolidated the code to calculate a sprite's display angle for all 3 renderers.
As it turned out, they all differed in their feature support because they had always been updated independently by different people.
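As an illustration of what such a shared helper might compute, here is simplified placeholder math; the real consolidated function also has to deal with the sprite flags whose handling differed, which are omitted here:

```cpp
// Returns which of 'numRotations' sprite rotations to display for a thing at
// (thingX, thingY) facing thingAngle (radians), seen from (viewX, viewY).
#include <cmath>

inline int GetSpriteRotation(double viewX, double viewY,
                             double thingX, double thingY,
                             double thingAngle, int numRotations)
{
    const double pi = 3.14159265358979323846;
    const double angToViewer = std::atan2(viewY - thingY, viewX - thingX);
    const double step = 2.0 * pi / numRotations;

    double rel = angToViewer - thingAngle + step * 0.5;   // center the rotation buckets
    while (rel < 0.0) rel += 2.0 * pi;
    return int(rel / step) % numRotations;
}
```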