* re-added screen blends for images from the hardware renderer.
* moved all postprocessing of the image out of the renderers.
* cleaned out a large piece of cruft for handling the palette in the frame buffer class. This was all a remnant of the old paletted backend that no longer exists. Nowadays the screen blend is just a postprocessing effect drawn over the 3D screen, there is no need to maintain any of it as global state anymore.
* since the engine doesn't produce paletted screenshots anymore there is no need to have handling for it in the generation code. This depended on otherwise obsolete information so it got removed along with that information.
With localization for non-Latin languages on the support list, the multibyte API doesn't cut it anymore. It can neither handle system text output outside the local code page, nor can an ANSI window receive text input outside its own code page.
Similar problems exist for file names. With the multibyte API it is impossible to handle any file whose name contains characters outside the active local code page.
So as of now, everything that may pass along some Unicode text will use the Unicode API with some text conversion functions. The only places where calls to the multibyte API were left are those where known string literals are passed or where the information is not used for anything but comparing it to other return values from the same API.
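For illustration, a conversion helper along these lines does the job (a sketch using the Win32 Unicode API directly; the helper name and error handling are simplified compared to the engine's own conversion functions):

```cpp
// Sketch: convert UTF-8 text to UTF-16 for the W-suffixed Win32 calls.
#include <windows.h>
#include <string>

std::wstring WidenUTF8(const std::string &text)
{
    if (text.empty()) return std::wstring();
    int len = MultiByteToWideChar(CP_UTF8, 0, text.c_str(), (int)text.size(), nullptr, 0);
    std::wstring wide(len, L'\0');
    MultiByteToWideChar(CP_UTF8, 0, text.c_str(), (int)text.size(), &wide[0], len);
    return wide;
}
```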
It now reads everything into a two-dimensional TMap and creates a list of mappings that apply to the current setting.
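As a rough sketch of that two-level lookup, with standard containers standing in for the engine's TMap and all names purely illustrative:

```cpp
// Sketch: outer key is the language identifier ("default", "en", "de", ...),
// inner key is the string label; the active mapping is built by letting more
// specific languages override the defaults.
#include <string>
#include <unordered_map>
#include <vector>

using LangTable = std::unordered_map<std::string, std::unordered_map<std::string, std::string>>;

std::unordered_map<std::string, std::string>
BuildActiveStrings(const LangTable &all, const std::vector<std::string> &activeLanguages)
{
    std::unordered_map<std::string, std::string> result;
    for (const auto &lang : activeLanguages)        // ordered from least to most specific
    {
        auto it = all.find(lang);
        if (it == all.end()) continue;
        for (const auto &entry : it->second)
            result[entry.first] = entry.second;     // later entries override earlier ones
    }
    return result;
}
```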
The constant need for reloading was the main blocker in redesigning how Dehacked strings get inserted. Currently they override everything, but IWAD-based Dehacked text shouldn't block PWAD overrides from PWADs' LANGUAGE lumps and instead be treated as coming from an [en default] block.
This also renames the main block from [enu default] to [en default], because it should be treated as the English default for all English locales and not just make it fall through to the base default as it did before.
Having everything lumped together made this a maintenance hassle because it affected how the level has to be stored.
This hasn't been tested yet, so it may not work as intended!
currentUILevel is now primaryLevel.
For ZScript, currentVMLevel was added. This is also exported as 'level' and will change as needed.
This also means that no breaking deprecations will be needed in the future, because in order to sandbox a level only 4 variables need to be handled: level, players, playeringame and consoleplayer.
The remaining global variables are not relevant for the level state.
The static 'level' has been mostly removed from the code except some places that still need work.
I think these were the last two still missing it; all remaining uses of the global level variable are in code that doesn't get run through a level tick and is supposed to access the primary level.
This is supposed to become the place where all pure play code should be placed, but for that, all CVARs, CCMDs and other things that do not directly handle play data should be taken out to make code reviewing easier. These now get collected in two separate files, g_cvars.cpp and g_dumpinfo.cpp respectively.
The sole ZScript property in here has also been moved - to thingdef_properties.cpp.
- moved parts of the render setup out of the separate render functions.
Things like particle and polyobject linking were duplicated several times for rendering different things in different renderers.
These things only need to be set up once before the renderer is started so it makes a lot more sense to consolidate them into one place outside the actual rendering code.
There is no need to do this deep inside the renderer where it required code duplication and made it problematic to execute on multiple levels.
This is now being done before and after the top level call into the renderer in d_main.cpp.
This also serializes the interpolator itself to avoid problems with the Serialize functions adding the interpolations into the list which can only work with a single global instance.
Since currently there is only one level, this will obviously only run once on that level for the time being.
This is mainly used for CCMDs and CVARs which either print some diagnostics or change some user-settable configuration.
Doing this intermingled with the thinkers is highly unsafe because there are absolutely no guarantees about order of execution.
Effectively it ran these commands right in the middle of the playsim which could cause all sorts of synchronization issues, because CCMDs are part of the UI, not the playsim.
- pass a const string to AddCommandString.
This function manipulated the input buffer, leading to all sorts of code contortions to make sure that the passed parameter is clean for that.
This function will now create a copy of the passed parameter which it can manipulate without complicating its calling code.
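In essence (a minimal sketch of the approach, not the actual code in c_dispatch.cpp; the real signature and body differ):

```cpp
// Sketch: work on a private copy so the caller's buffer is never modified.
#include <string>

void AddCommandString(const char *text)
{
    std::string work(text);   // mutable working copy, owned by this function
    // ... tokenize and dispatch 'work' without ever touching the caller's string ...
}
```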
This also changes the action special interface to pass a Level parameter to the separate functions and makes a few other minor adjustments to the polyobject code.
This should be less of a drag on the playsim than having each light a separate actor. A quick check with ZDCMP2 showed that the light processing time was reduced to 1/3rd from 0.5 ms to 0.17 ms per tic.
It's also one native actor class less.
The mod which prompted me to add this is "The Chosen" which is a Dehacked-based TC and repurposes many original actors for something entirely different.
The stock lights are not usable for this and would make it impossible to add a GAMEINFO lump to it because then there is no way to disable loading of lights in the startup screen.
This was done to make reviewing easier, again because it is virtually impossible to search for the operators in the code.
Going through this revealed quite a few places where texture animations were on but shouldn't and even more places that did not check PASLVERS, although they were preparing some paletted rendering.
- moved the ALTHUDCF parser to PClass::StaticInit, so that it gets done right after creating the actor definitions.
All that's left to do is to stop reallocating the AltHud object for each frame and store it in a better suited place.
For this to work the 2D mode has to be properly set and unset at the right places so that no double mapping occurs and no render operation can happen while in 2D mode.
This code was written when the window wasn't resizable and didn't actually manage to restore it before. With today's changes this design flaw caused totally incorrect results.
Like Linux and macOS this will only support borderless fullscreen in the active desktop resolution now, which is what modern systems need.
The list of discrete resolutions has been removed as it makes no sense anymore with a fixed video mode - all the other scaling options remain active, though.
Remaining object(s) led to a potential crash on the next garbage collection cycle
Assertion failure was triggered during restarting in Debug configuration
This is better made part of the 2D interface.
That would have been done long ago if it hadn't been for the totally incompatible way this was handled by the purely paletted software renderer.
Now with that out of the way there is no point keeping this code this deeply embedded in the renderer.
Lots of this was still laid out for DirectDraw. This removes most of Begin2D so that it can be done more cleanly.
Note that this commit renders weapon sprites and screen blends incorrectly. Those will be fixed in an upcoming commit.
- with renderers freely switchable, some shortcuts in the 3D floor code had to be removed, because now the hardware renderer can get FF_THISINSIDE-flagged 3D floors.
- changed handling of attenuated lights in the legacy renderer to be adjusted when being rendered instead of when being spawned. For the software renderer the light needs to retain its original values.
This does not work with a setup where the same backend is driving both renderers.
Most of this is now routed through 'screen', and the decision between renderers has to be made inside the actual render functions.
The software renderer is still driven by a thin opaque interface to keep it mostly an isolated module.
This was originally invented to fix the sprite offsets for the hardware renderer.
Changed it so that it doesn't override the original offsets but acts as a second set.
A new CVAR has been added to allow controlling the behavior per renderer.
No more locking insanity! :)
There are no locking counters or other safeguards here that would complicate the implementation, because there are precisely two places where this buffer must be locked - the RenderView functions of the regular and poly SW renderers, which cannot be called recursively.
In its current form this is quite useless. What's really needed is to require a lock on the RenderBuffer for the 3D scene, but since this is not needed for the 2D stuff anymore it can be done far simpler.
This was done mainly to reduce the number of occurrences of the word FTexture, but it immediately helped detect two small and mostly harmless bugs that were found due to the stricter type checks.
When running in a confined environment (such as a snap) it may not be possible to write to directories such as ~/.config. By using the $HOME variable instead of the '~' shortcut, the confined environment can pass an alternative 'home' directory with write privileges. I have only changed this for posix/unix and haven't touched the code for macOS, as I don't know if that behaves differently.
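A minimal sketch of the lookup under that assumption, with a hypothetical GetConfigDir helper (the engine's actual path handling is more involved):

```cpp
// Minimal sketch: prefer $HOME over a literal '~' so a confined environment can
// redirect the user directory. GetConfigDir is a hypothetical helper name.
#include <cstdlib>
#include <string>

static std::string GetConfigDir()
{
    const char *home = std::getenv("HOME");   // respected by snap-style sandboxes
    if (home == nullptr || *home == '\0')
        home = ".";                           // fall back to the current directory
    return std::string(home) + "/.config";
}
```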
Since this is a non-standard function it's better kept to as few places as possible, so now DirEntryExists returns an additional flag to say what type an entry is and is being used nearly everywhere where stat was used, excluding a few low level parts in the POSIX code.
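Roughly the idea, as a hedged sketch on top of plain stat() for the POSIX side; the engine's real DirEntryExists may differ in name, signature and platform handling:

```cpp
// Sketch only: report both existence and whether the entry is a directory,
// so callers don't need to use stat() directly.
#include <sys/stat.h>

bool DirEntryExists(const char *pathname, bool *isdir)
{
    if (isdir) *isdir = false;
    struct stat info;
    if (stat(pathname, &info) != 0)
        return false;                         // entry does not exist (or is inaccessible)
    if (isdir) *isdir = S_ISDIR(info.st_mode);
    return true;
}
```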
- now that the frame buffer stores its render time, the 'ms' return from I_GetTimeFrac is not needed anymore, we may just as well use the globally stored value instead.
The only feature this value was ever used for was texture warping.
Since this calls I_WaitVBL, which resets the frame time, it was essentially just like calling a real-time timer anyway and nothing in it required a specific 0-timepoint.
The same applies to the ZScript interface. All it needs is a millisecond-precise timer with no semantics attached.
* store the frame time in the current screen buffer from where all render code can access it.
* replace some uses of I_MSTime with I_FPSTime, because they should not use a per-frame timer. The only one left is the wipe code but even this doesn't look like it needs either a per-frame timer or a timer counting from the start of the playsim.
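For reference, a millisecond-precise timer with no attached semantics can be as small as this (a sketch using std::chrono, not the engine's actual I_MSTime/I_FPSTime implementation):

```cpp
// Sketch of a semantics-free millisecond timer; any monotonic clock will do.
#include <chrono>
#include <cstdint>

static uint64_t MSTimeNow()
{
    using namespace std::chrono;
    return (uint64_t)duration_cast<milliseconds>(steady_clock::now().time_since_epoch()).count();
}
```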
- moved timer definitions into their own header/source files. d_main is not the right place for this.
- removed some leftover cruft from the old timer code.
- Added: d_main.cpp now searches for "gzdoom_optional_assets.pk3" - this can be changed in version.h for fork authors.
- Updated forum links to point to ZDoom.org.
The old code went through a list of predefined file names and looked each of them up in a list of predefined directories until it found a match. This made it nearly impossible to add custom IWAD support because the list of valid file names could not be extended.
This has now been switched around to run a scan for matching files on each given directory. With this approach it can look for *.iwad and *.ipk3 as IWAD extensions as well and read an IWADINFO out of these files that can be added to the internal list of IWADs, making it finally possible to define custom IWADs without having to add them to the internal list.
(This isn't fully tested yet so some errors may still occur.)
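Roughly, the per-directory scan amounts to something like this (an illustrative sketch using std::filesystem rather than the engine's own file handling; the function name and extension checks are assumptions):

```cpp
// Sketch: collect IWAD candidates from one directory. The predefined file names
// are still matched elsewhere; *.iwad and *.ipk3 files are picked up here and an
// IWADINFO read out of them decides whether they really qualify.
#include <filesystem>
#include <string>
#include <system_error>
#include <vector>

std::vector<std::string> ScanForIWADs(const std::string &directory)
{
    namespace fs = std::filesystem;
    std::vector<std::string> found;
    std::error_code ec;
    for (const auto &entry : fs::directory_iterator(directory, ec))
    {
        if (!entry.is_regular_file()) continue;
        auto ext = entry.path().extension().string();   // case handling omitted in this sketch
        if (ext == ".iwad" || ext == ".ipk3")
            found.push_back(entry.path().string());
    }
    return found;
}
```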
This fixes two issues:
* timer related texture animations are not being recreated multiple times if a scene renders multiple viewpoints (e.g. camera textures or portals.)
* interpolation is smoother when maps have a high think time of multiple milliseconds. A good map to see the difference is ZDCMP2, which has a think time of 4-5 milliseconds. With the timer taken in real time after the thinkers have run and VSync on, this resulted in alternating time slices of 11 and 21 ms between frame interpolations instead of an even 16 ms as needed for smooth 60 fps, because roughly every second frame was offset by those 5 ms.
It now works the following way:
(0) - Force off (ZDoom defaults)
(1) - Force on (Doom defaults)
(2) - Auto off (Prefer ZDoom defaults - if DEHACKED is detected with no ZSCRIPT it will turn on) (default)
(3) - Auto on (Prefer Doom defaults - if DECORATE is detected with no ZSCRIPT it will turn off)
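The decision table above, sketched as code (the enum and function names are made up for illustration; the actual detection logic in the engine differs):

```cpp
// Sketch of the four modes; 'Doom defaults' on/off depending on mode and lump detection.
enum class DefaultsMode { ForceZDoom = 0, ForceDoom = 1, AutoZDoom = 2, AutoDoom = 3 };

bool UseDoomDefaults(DefaultsMode mode, bool hasDehacked, bool hasDecorate, bool hasZScript)
{
    switch (mode)
    {
    case DefaultsMode::ForceZDoom: return false;                          // (0) force off
    case DefaultsMode::ForceDoom:  return true;                           // (1) force on
    case DefaultsMode::AutoZDoom:  return hasDehacked && !hasZScript;     // (2) on only for DEHACKED without ZSCRIPT
    case DefaultsMode::AutoDoom:   return !(hasDecorate && !hasZScript);  // (3) off only for DECORATE without ZSCRIPT
    }
    return false;
}
```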
For some files that had the Doom Source license attached but saw heavy external contributions over the years I added a special note to license all original ZDoom code under BSD.
This is to ensure that the Class pointer can be set right on creation. ZDoom had always depended on handling this lazily which poses some problems for the VM.
So now there is a variadic Create<classtype> function taking care of that, but to ensure that it gets used, direct access to the new operator has been blocked.
This also necessitated making DArgs a regular object because they get created before the type system is up. Since the few uses of DArgs are easily controllable this wasn't a big issue.
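The pattern, sketched with made-up names rather than GZDoom's actual declarations: a variadic factory constructs the object and fills in the Class pointer immediately, while plain 'new' stays inaccessible to outside code.

```cpp
// Illustrative sketch only; the engine's DObject/PClass machinery is more involved.
#include <cstddef>
#include <new>
#include <typeinfo>
#include <utility>

struct PClass { const char *TypeName; };       // stand-in class descriptor

class DObject
{
public:
    PClass *Class = nullptr;                   // must be valid as soon as the object exists
    virtual ~DObject() = default;
    void operator delete(void *mem) { ::operator delete(mem); }
protected:
    void *operator new(std::size_t size) { return ::operator new(size); } // blocked for outside code
    template<class T, class... Args> friend T *Create(Args&&... args);    // only the factory may allocate
};

template<class T, class... Args>
T *Create(Args&&... args)
{
    T *obj = new T(std::forward<Args>(args)...);      // allowed here via friendship
    static PClass descriptor = { typeid(T).name() };  // stand-in for the engine's class registry
    obj->Class = &descriptor;                         // set the Class pointer right on creation
    return obj;
}
```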
- did a bit of optimization on the bots' decision making whether to pick up a health item or not.
See https://forum.drdteam.org/viewtopic.php?t=7588
Processing order is now the same as in Chocolate Doom.
PrBoom+ loads separate files after all WAD lumps, though.
This makes sense, but it would change a loading sequence that has existed in ZDoom for years.
- activated the RenderOverlay event, now that it can be called from the correct spot, i.e. right after the top level HUD messages are drawn. The system's status output will still be drawn on top of them.