If a step has process tasks, any render or compute
pipelines/renderpasses are **not** run automatically: the idea is that
the process tasks need to run the relevant pipelines in a custom
manner, but still need the objects to be created.
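A minimal sketch of the dispatch rule, with illustrative structure and
function names (not the actual render-graph API):

```c
// sketch: hypothetical step dispatch; names are illustrative only
void
run_step (step_t *step, taskctx_t *ctx)
{
    if (step->process_tasks) {
        // the process tasks drive the pipelines themselves; the
        // pipeline/renderpass objects exist but are not run here
        run_tasks (step->process_tasks, ctx);
    } else {
        if (step->renderpass) {
            run_renderpass (step->renderpass, ctx);
        }
        if (step->compute) {
            run_compute (step->compute, ctx);
        }
    }
}
```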
The recent light changes highlighted that the renderer does not own the
efrags (segfault in qwaq when shutting down my test scene). After
digging through the history of efrag clearing, it turns out that the
renderer never owned them; I just didn't understand the concept of
scenes at the time I moved efrags into the renderer.
This makes a pretty significant difference: 8-25% for demo1, demo2,
demo3 and bigass1. Many thanks to Nick Alger
(https://stackoverflow.com/questions/5122228/box-to-sphere-collision).
It took me a moment to recognize that it's the Minkowski difference (and
maybe GJK?). Of course, I adapted it for simd.
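The core of the test is clamping the sphere center to the box and
comparing the squared distance to the squared radius (equivalently,
testing whether the center lies inside the box dilated by the sphere,
ie the Minkowski difference). A scalar sketch of the idea; the actual
code works on simd vectors:

```c
#include <stdbool.h>

// sketch: clamp-based box-sphere intersection test (scalar version)
static bool
box_sphere_intersect (const float mins[3], const float maxs[3],
                      const float center[3], float radius)
{
    float dist_sq = 0;
    for (int i = 0; i < 3; i++) {
        // closest point on the box to the sphere center, per axis
        float v = center[i];
        if (v < mins[i]) v = mins[i];
        if (v > maxs[i]) v = maxs[i];
        float d = center[i] - v;
        dist_sq += d * d;
    }
    return dist_sq <= radius * radius;
}
```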
This eliminates the O(N^2) operation (N = map leaf count) of finding
visible lights, and will later allow finer culling of the lights, as
they can be tested against the leaf volume (which they currently are
not; this was just about getting things going). However, this has
severely hurt ad_tears' performance (I suspect due to the extreme
number of leafs), though the speed does seem to be very steady.
Hopefully, reconstructing the vis clusters will help (I imagine it will
help in many places, not just lights).
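Roughly, the replacement is to splat each light into only the leafs
visible from the light's own leaf via the PVS, rather than scanning all
leaf pairs. A sketch under that assumption; `add_light_to_leaf` and the
light type are hypothetical (Mod_PointInLeaf and Mod_LeafPVS are the
stock Quake helpers):

```c
// sketch: mark only the leafs each light can potentially reach,
// using the pvs of the leaf containing the light
void
mark_light_leafs (light_t *lights, int num_lights, model_t *model)
{
    for (int i = 0; i < num_lights; i++) {
        mleaf_t    *leaf = Mod_PointInLeaf (lights[i].origin, model);
        byte       *pvs = Mod_LeafPVS (leaf, model);
        for (int j = 0; j < model->numleafs; j++) {
            if (pvs[j >> 3] & (1 << (j & 7))) {
                // finer culling would test the light's radius
                // against the leaf volume here (not yet done)
                add_light_to_leaf (&lights[i], &model->leafs[j + 1]);
            }
        }
    }
}
```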
I guess I wasn't sure how to find all the allocated entities from within
the registry, but it turned out to be trivial. This takes care of leaked
static entities (and, in a later commit, leaked light entities, which is
how I found the problem).
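Something along these lines; the pool layout and helper are
illustrative rather than the actual ECS API:

```c
// sketch: walk every live entity in the registry and delete it.
// the registry fields and destroy_entity stand in for however the
// registry actually tracks its allocated ids.
static void
clear_entities (registry_t *reg)
{
    for (uint32_t i = 0; i < reg->num_entities; i++) {
        destroy_entity (reg, reg->entities[i]);
    }
}
```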
The grid calculations are modified from those of Inigo Quilez
(https://iquilezles.org/articles/filterableprocedurals/), but they give
very nice results: when thin enough, the lines fade out nicely instead
of producing crazy moiré patterns. Though currently disabled, the
default planes are the xy, yz and zx planes, with colored axes.
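For reference, the core of the filterable grid is integrating the line
pattern over the pixel footprint; a scalar C transcription of the math
from the article (presumably the real code lives in a shader; `n` is
the reciprocal of the line thickness):

```c
#include <math.h>

// sketch: box-filtered grid coverage, after Inigo Quilez.
// p*: position in grid cells, w*: filter width (pixel footprint),
// n: 1/line-thickness. returns ~0 on lines, ~1 in cell interiors,
// fading smoothly as the lines get thinner than a pixel.
static float
filtered_grid (float px, float py, float wx, float wy, float n)
{
    float ax = px + 0.5f * wx, bx = px - 0.5f * wx;
    float ay = py + 0.5f * wy, by = py - 0.5f * wy;
    float ix = (floorf (ax) + fminf ((ax - floorf (ax)) * n, 1)
                - floorf (bx) - fminf ((bx - floorf (bx)) * n, 1))
               / (n * wx);
    float iy = (floorf (ay) + fminf ((ay - floorf (ay)) * n, 1)
                - floorf (by) - fminf ((by - floorf (by)) * n, 1))
               / (n * wy);
    return (1 - ix) * (1 - iy);
}
```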
Based on the article
(https://developer.nvidia.com/content/depth-precision-visualized), this
should give nice precision behavior, and it removes the need to worry
about large maps getting clipped. If I'm doing my math correctly, near
precision is still crazy high despite the reversal, and (thanks to the
reversed depth) resolution is still about a quarter of a unit (for a
near clip of 4) out at 1M unit distance.
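To check the numbers: assuming the reversed mapping stores z = near/d
with an infinite far plane, the world-space resolution at distance d is
the float spacing at z scaled by |dd/dz| = d^2/near:

```c
#include <math.h>

// sketch: world-space depth resolution at distance d for reversed
// depth (stored value z = near/d, infinite far plane)
static float
depth_resolution (float near, float d)
{
    float z = near / d;                 // stored depth value
    float ulp = nextafterf (z, 1) - z;  // float spacing at z
    return ulp * d * d / near;          // |dd/dz| = d*d/near
}
// depth_resolution (4, 1e6f) comes out to a fraction of a unit
```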
This seems to be more for legacy X11 (ie, without fixes etc), but
fullscreen really shouldn't affect grabbing directly (rather, it should
be up to the client whether grabbing (and thus warping) is enabled at
all).
con_linewidth starts out as 0, which leads to bad results for the
initial widths of input lines and later calculations. Really, though,
they probably shouldn't be using size_t for the width, but this is a
nice quick fix.
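The failure mode is just unsigned underflow: with a size_t width of 0,
any subtraction wraps to a huge value. A trivial illustration:

```c
#include <stdio.h>

int
main (void)
{
    size_t linewidth = 0;           // con_linewidth before init
    size_t usable = linewidth - 2;  // wraps to SIZE_MAX - 1
    printf ("%zu\n", usable);       // 18446744073709551614 on 64-bit
    return 0;
}
```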
Because most of my testing is done with the demos, I hadn't noticed the
double-draw until flying around with the debug camera (it showed as a
weird shimmer behind the sky layers).
The idea is the UI system can call into the renderer without knowing
anything about the renderer, and the renderer can do what it pleases to
create UI elements in the correct context (passed as a parameter).
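In other words, the renderer hands the UI a small table of callbacks
plus an opaque context; the names here are illustrative, not the actual
API:

```c
// sketch: the ui sees only this table; the renderer fills it in
// and supplies its own context pointer
typedef struct ui_render_funcs_s {
    void      (*draw_line) (void *ctx, int x0, int y0, int x1, int y1,
                            int color);
    void      (*draw_glyph) (void *ctx, int x, int y, int glyph,
                             int color);
} ui_render_funcs_t;

static void
ui_draw_box (const ui_render_funcs_t *funcs, void *ctx,
             int x, int y, int w, int h, int color)
{
    // the ui knows nothing about what's behind funcs/ctx
    funcs->draw_line (ctx, x, y, x + w, y, color);
    funcs->draw_line (ctx, x, y + h, x + w, y + h, color);
    funcs->draw_line (ctx, x, y, x, y + h, color);
    funcs->draw_line (ctx, x + w, y, x + w, y + h, color);
}
```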
Panels can be anchored to a widget in another hierarchy, allowing for
things like cascading menus. They can also be extended by referencing
them by name, allowing subsystems to add items to an existing panel
(eg, extending a menu).
This prevents the layout system from repositioning the text view and
thus breaking text-shaping. Now Tengwar Telcontar looks much more
balanced in the widgets.
This is done only in nq because I am not sure of the consequences for
the rest of quake, and I do most of my testing in nq when not using
qwaq.
Needed for thousands separators in formatted printing.
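If this is the usual setlocale call, a minimal sketch of what it
enables (the `'` conversion flag is a POSIX extension; the default C
locale has no grouping rule, so separators never appear without it):

```c
#include <locale.h>
#include <stdio.h>

int
main (void)
{
    // activate the environment's locale for number formatting
    setlocale (LC_NUMERIC, "");
    printf ("%'d\n", 1234567);  // 1,234,567 under an en_US locale
    return 0;
}
```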
The lights debug is from the light splat experiment (which is why I
kept the code), and the bsp debug is based on it. Both are currently
disabled until I get UI controls in.
SCR_UpdateScreen_Legacy now takes only the screen functions pointer (it
didn't need camera or realtime), and the camera swizzle code has been
moved into one place to make cleaning it up easier (when I get around
to auditing AngleVectors etc).
Skipping the root view (widget) sort of made sense before windows
became separate canvases, as there was only the one hierarchy, but
doing so prevented windows (panels) from fitting themselves to their
children. However, now I need to think of a good way of specifying a
minimum size for panels.
Widgets can't possibly be active when the entire UI is hidden, and
resetting the active widget when hiding the UI helps when the state
gets broken due to widget id conflicts.
The intent is to use them for menus, tooltips and anything else along
those lines, but windows were a good starting point (and this puts a
border along the top of the window too).
It's not perfect, as the first N expanding children get grown by 1
pixel regardless of weight, but it's much better than leaving a
(possibly quite large) gap at the edge of the layout.
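A sketch of the scheme with illustrative names: after the weighted
split, the integer remainder goes one pixel each to the first children,
which is where the weight-ignoring growth comes from:

```c
// sketch: split `space` pixels among children by weight; the
// integer remainder goes 1px each to the first `rem` children,
// regardless of their weights
static void
distribute (int space, int count, const int *weights, int *sizes)
{
    int total = 0, used = 0;
    for (int i = 0; i < count; i++) {
        total += weights[i];
    }
    for (int i = 0; i < count; i++) {
        sizes[i] = space * weights[i] / total;
        used += sizes[i];
    }
    // the remainder is always less than count, so this never overruns
    for (int rem = space - used, i = 0; i < rem; i++) {
        sizes[i] += 1;
    }
}
```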
I'm not sure this is what I want, especially in the long run, but it
does make simple windows much easier to create (and not look broken due
to being specified too small).