The most frequent call using this is the regular texture creation function, where it results in a pointless allocation and destruction of a temporary string that is easily avoided.
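A minimal sketch of the idea, using std::string and a dummy lookup as stand-ins for FString and the real texture lookup (names here are illustrative, not the engine's API): accepting a plain C string instead of a string object avoids constructing the temporary on every call.

```cpp
#include <string>
#include <cstdio>

// Before: taking the name as a string object forces the caller to construct
// and destroy a temporary string on every lookup.
static int FindTextureByValue(std::string name)
{
    return name.empty() ? -1 : 0;   // placeholder lookup
}

// After: accepting a plain C string avoids the temporary entirely when the
// caller already has a const char* (the common case in texture creation).
static int FindTextureByName(const char *name)
{
    return (name == nullptr || *name == '\0') ? -1 : 0;   // placeholder lookup
}

int main()
{
    const char *lumpname = "STARTAN2";
    FindTextureByValue(lumpname);   // allocates a temporary std::string
    FindTextureByName(lumpname);    // no allocation at all
    std::printf("lookups done for %s\n", lumpname);
    return 0;
}
```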
- with renderers freely switchable, some shortcuts in the 3D floor code had to be removed, because now the hardware renderer can get FF_THISINSIDE-flagged 3D floors.
- changed handling of attenuated lights in the legacy renderer so that they are adjusted when being rendered instead of when being spawned. The software renderer needs the light to retain its original values.
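As a rough illustration (with a made-up light struct and an arbitrary adjustment factor, not the engine's real dynamic light class), the adjustment can be derived on the fly in the legacy render path while the stored light data stays untouched for the software renderer:

```cpp
#include <cstdio>

// Simplified, hypothetical light representation.
struct DynLight
{
    float radius;       // original radius, as spawned
    bool attenuated;    // light uses attenuated falloff
};

// Legacy-renderer path: derive the adjusted radius at render time instead of
// rewriting the light when it is spawned. The multiplier is an illustrative
// assumption, not the engine's actual factor.
static float LegacyRenderRadius(const DynLight &light)
{
    return light.attenuated ? light.radius * 1.5f : light.radius;
}

int main()
{
    DynLight light { 128.0f, true };

    // The software renderer keeps reading the unmodified value...
    std::printf("software renderer sees radius %.1f\n", light.radius);
    // ...while the legacy hardware renderer adjusts it only for its own pass.
    std::printf("legacy renderer uses radius %.1f\n", LegacyRenderRadius(light));
    return 0;
}
```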
This does not work with a setup where the same backend is driving both renderers.
Most of this is now routed through 'screen', and the decision between renderers has to be made inside the actual render functions.
The software renderer is still driven by a thin opaque interface to keep it mostly an isolated module.
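A bare-bones sketch of this arrangement, with illustrative names rather than the engine's actual classes: a single backend object ("screen") exposes the render entry point, decides on the path inside it, and talks to the software renderer only through a small abstract interface.

```cpp
#include <memory>
#include <cstdio>

// Thin, opaque interface to the software renderer: the backend only sees a
// RenderView entry point, which keeps the module isolated.
class ISWRenderer
{
public:
    virtual ~ISWRenderer() = default;
    virtual void RenderView() = 0;
};

class SWRendererImpl : public ISWRenderer
{
public:
    void RenderView() override { std::printf("software scene\n"); }
};

// One backend driving both renderers; the decision which one to use is made
// inside the render function, not by swapping the backend object.
class FrameBufferSketch
{
public:
    bool UseHardware = true;
    std::unique_ptr<ISWRenderer> SWRenderer = std::make_unique<SWRendererImpl>();

    void RenderView()
    {
        if (UseHardware) std::printf("hardware scene\n");
        else SWRenderer->RenderView();
    }
};

int main()
{
    FrameBufferSketch screen;
    screen.RenderView();            // hardware path
    screen.UseHardware = false;
    screen.RenderView();            // software path through the opaque interface
    return 0;
}
```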
This would cause incorrectly generated textures, and the reason for it no longer exists because CreateTexBuffer now handles this as a postprocessing step.
This theoretically means that the software renderer could access this data as well - if only it had been written with a more flexible texture interface.
However, as things stand, this may require quite a bit of work to achieve.
The old organization made sense when ZDoom was still a thing, but now it would be better if all pure data with no dependence on renderer implementation details were moved out.
A separation between the GL2 and GL3+4 renderers looks inevitable, and the more data is out of the renderer by the time that happens, the better.
No more locking insanity! :)
There are no locking counters or other safeguards here that would complicate the implementation, because there are precisely two places where this buffer must be locked - the RenderView functions of the regular and poly SW renderers, which cannot be called recursively.
In its current form this is quite useless. What is really needed is to require a lock on the RenderBuffer for the 3D scene, but since this is no longer needed for the 2D code it can be done far more simply.
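Under that assumption, the locking can be reduced to a plain flag plus an assert; the sketch below uses invented names and stands in for the real RenderBuffer handling only conceptually.

```cpp
#include <cassert>
#include <cstdio>

// Minimal sketch of a buffer lock with no counters or other safeguards:
// valid only because the two RenderView call sites can never nest.
class RenderBufferSketch
{
    bool locked = false;

public:
    void Lock()
    {
        assert(!locked);   // recursion would be a bug, so just assert
        locked = true;
    }

    void Unlock()
    {
        assert(locked);
        locked = false;
    }
};

int main()
{
    RenderBufferSketch buffer;

    // Regular software renderer's RenderView:
    buffer.Lock();
    std::printf("render 3D scene\n");
    buffer.Unlock();

    // Poly renderer's RenderView, never nested inside the call above:
    buffer.Lock();
    std::printf("render 3D scene (poly)\n");
    buffer.Unlock();
    return 0;
}
```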
This was a bad idea from the start and really only made sense with DirectDraw.
These days a FrameBuffer represents an abstract hardware canvas that shares nothing with a software canvas, so having these classes linked together makes things needlessly complicated.
The software render buffer is now a canvas object owned by the FrameBuffer.
Note that this commit deactivates a few things in the software renderer, but from the looks of it none of those will be needed anymore if we set OpenGL 2 as the minimum target.
This is a necessary first step for simplifying the texture handling in order to refactor it.
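The ownership change can be pictured roughly like this (simplified types standing in for FrameBuffer and DCanvas, not the actual class layout): the hardware frame buffer holds the software canvas as a member instead of being linked to a shared canvas base.

```cpp
#include <cstdint>
#include <cstdio>
#include <memory>
#include <vector>

// Simplified stand-in for a software canvas.
class SoftwareCanvasSketch
{
public:
    SoftwareCanvasSketch(int w, int h) : Width(w), Height(h), Pixels(size_t(w) * h) {}
    int Width, Height;
    std::vector<uint32_t> Pixels;
};

// The FrameBuffer is an abstract hardware canvas; instead of inheriting from
// the software canvas, it merely owns one to render the software scene into.
class HardwareFrameBufferSketch
{
public:
    HardwareFrameBufferSketch(int w, int h)
        : RenderBuffer(std::make_unique<SoftwareCanvasSketch>(w, h)) {}

    // Owned software render buffer, used only when the software renderer runs.
    std::unique_ptr<SoftwareCanvasSketch> RenderBuffer;
};

int main()
{
    HardwareFrameBufferSketch screen(640, 400);
    std::printf("software render buffer: %dx%d\n",
                screen.RenderBuffer->Width, screen.RenderBuffer->Height);
    return 0;
}
```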
These cannot be done with the regular textures, so there needs to be an option to create more than one native texture per FTexture. For completeness' sake, there is now also the option to create a paletted version of a texture if the regular one is true color. This fixes a long-standing problem that translations were not applied to non-paletted textures.
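A hedged sketch of the idea, with invented types rather than the real FTexture/native-texture interface: each game texture keeps a small set of native textures keyed by translation and by whether a paletted variant was requested, creating missing variants on demand.

```cpp
#include <cstdio>
#include <map>
#include <utility>

// Placeholder for a GPU texture handle.
struct NativeTextureHandle
{
    int id;
};

class GameTextureSketch
{
    // key: (translation index, paletted?)
    std::map<std::pair<int, bool>, NativeTextureHandle> Natives;
    int NextId = 1;

public:
    NativeTextureHandle &GetNative(int translation, bool paletted)
    {
        auto key = std::make_pair(translation, paletted);
        auto it = Natives.find(key);
        if (it == Natives.end())
        {
            // Create the missing variant on demand, e.g. a paletted copy of a
            // true-color texture or a translated remap.
            it = Natives.emplace(key, NativeTextureHandle{ NextId++ }).first;
        }
        return it->second;
    }
};

int main()
{
    GameTextureSketch tex;
    std::printf("true color:            %d\n", tex.GetNative(0, false).id);
    std::printf("paletted:              %d\n", tex.GetNative(0, true).id);
    std::printf("translated true color: %d\n", tex.GetNative(3, false).id);
    return 0;
}
```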
* store the frame time in the current screen buffer, from where all render code can access it (sketched below).
* replace some uses of I_MSTime with I_FPSTime, because they should not use a per-frame timer. The only one left is the wipe code, but even that does not look like it needs either a per-frame timer or a timer counting from the start of the playsim.
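Conceptually the frame-time change looks roughly like this (field and function names are illustrative, not the engine's actual API): the time is sampled once at the start of the frame and cached on the screen object, and all render code reads that cached value.

```cpp
#include <chrono>
#include <cstdint>
#include <cstdio>

// Sketch only: a 'screen' object that caches the frame time once per frame so
// all render code reads the same value.
struct ScreenSketch
{
    uint64_t FrameTime = 0;     // milliseconds, captured at the start of the frame
};

static uint64_t CurrentTimeMS()
{
    using namespace std::chrono;
    return duration_cast<milliseconds>(steady_clock::now().time_since_epoch()).count();
}

ScreenSketch screen;

static void BeginFrame()
{
    screen.FrameTime = CurrentTimeMS();     // sampled exactly once per frame
}

static void RenderSomething()
{
    // Every render function reads the cached value instead of calling a timer,
    // so the whole frame sees a consistent time stamp.
    std::printf("rendering at frame time %llu ms\n",
                (unsigned long long)screen.FrameTime);
}

int main()
{
    BeginFrame();
    RenderSomething();
    return 0;
}
```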